Jan 27 20:07:33 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 20:07:33 crc restorecon[4692]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:33 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:34 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 20:07:35 crc restorecon[4692]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 20:07:35 crc restorecon[4692]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 20:07:35 crc kubenswrapper[4858]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 20:07:35 crc kubenswrapper[4858]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 20:07:35 crc kubenswrapper[4858]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 20:07:35 crc kubenswrapper[4858]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 27 20:07:35 crc kubenswrapper[4858]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 20:07:35 crc kubenswrapper[4858]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.771566 4858 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775731 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775750 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775756 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775760 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775763 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775769 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775774 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775779 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775782 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775786 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775791 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775795 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775799 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775811 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775817 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775821 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775825 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775828 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775832 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775836 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775841 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775846 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775854 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775861 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775866 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775870 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775875 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775879 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775883 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775888 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775892 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775896 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775901 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775906 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775910 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775915 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775919 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775925 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775929 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775935 4858 feature_gate.go:330] 
unrecognized feature gate: ManagedBootImages Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775939 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775943 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775949 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775955 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775961 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775967 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775971 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775975 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775979 4858 feature_gate.go:330] unrecognized feature gate: Example Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775983 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775986 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775990 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775993 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.775997 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.776000 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.776004 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.776007 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.776010 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.776014 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.776017 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.776022 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.776025 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.776028 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.776032 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.776036 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
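[Editor's note] The wall of feature_gate.go warnings is the gate parser meeting OpenShift-specific gate names that plain Kubernetes never registered: unknown names draw "unrecognized feature gate" and are ignored, while known-but-locked gates (GA like ValidatingAdmissionPolicy, deprecated like KMSv1) are settable but draw a removal warning. A toy reimplementation of that classification, under the assumption that this is roughly how feature_gate.go behaves (the real k8s.io/component-base/featuregate code differs in detail):

```go
// Toy sketch of the gate classification behind the warnings above:
// unknown names are warned about and dropped; known gates that are GA or
// deprecated are settable but draw a removal warning. Illustrative only,
// not the actual featuregate implementation.
package main

import "fmt"

type stage int

const (
	alpha stage = iota
	beta
	ga
	deprecated
)

// A few gates the kubelet does know, per the log's final gate map.
var known = map[string]stage{
	"ValidatingAdmissionPolicy":              ga,
	"DisableKubeletCloudCredentialProviders": ga,
	"CloudDualStackNodeIPs":                  ga,
	"KMSv1":                                  deprecated,
	"NodeSwap":                               beta,
}

func set(enabled map[string]bool, name string, value bool) {
	st, ok := known[name]
	if !ok {
		fmt.Printf("W unrecognized feature gate: %s\n", name)
		return
	}
	switch st {
	case ga:
		fmt.Printf("W Setting GA feature gate %s=%v. It will be removed in a future release.\n", name, value)
	case deprecated:
		fmt.Printf("W Setting deprecated feature gate %s=%v. It will be removed in a future release.\n", name, value)
	}
	enabled[name] = value
}

func main() {
	enabled := map[string]bool{}
	for _, g := range []string{"NewOLM", "GatewayAPI"} { // OpenShift-only names from the log
		set(enabled, g, true)
	}
	set(enabled, "ValidatingAdmissionPolicy", true)
	set(enabled, "KMSv1", true)
	fmt.Println("feature gates:", enabled)
}
```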
Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.776040 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.776044 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.776048 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.776051 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.776054 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.776059 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778486 4858 flags.go:64] FLAG: --address="0.0.0.0" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778502 4858 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778590 4858 flags.go:64] FLAG: --anonymous-auth="true" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778596 4858 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778602 4858 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778607 4858 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778613 4858 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778620 4858 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778624 4858 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778629 4858 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778633 4858 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778637 4858 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778642 4858 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778646 4858 flags.go:64] FLAG: --cgroup-root="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778650 4858 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778655 4858 flags.go:64] FLAG: --client-ca-file="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778660 4858 flags.go:64] FLAG: --cloud-config="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778664 4858 flags.go:64] FLAG: --cloud-provider="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778668 4858 flags.go:64] FLAG: --cluster-dns="[]" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778676 4858 flags.go:64] FLAG: --cluster-domain="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778680 4858 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778685 4858 flags.go:64] FLAG: --config-dir="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778690 4858 flags.go:64] FLAG: 
--container-hints="/etc/cadvisor/container_hints.json" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778695 4858 flags.go:64] FLAG: --container-log-max-files="5" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778701 4858 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778705 4858 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778710 4858 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778714 4858 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778719 4858 flags.go:64] FLAG: --contention-profiling="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778723 4858 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778727 4858 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778731 4858 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778736 4858 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778741 4858 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778745 4858 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778750 4858 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778754 4858 flags.go:64] FLAG: --enable-load-reader="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778758 4858 flags.go:64] FLAG: --enable-server="true" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778762 4858 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778767 4858 flags.go:64] FLAG: --event-burst="100" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778771 4858 flags.go:64] FLAG: --event-qps="50" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778775 4858 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778779 4858 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778784 4858 flags.go:64] FLAG: --eviction-hard="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778789 4858 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778794 4858 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778798 4858 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778803 4858 flags.go:64] FLAG: --eviction-soft="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778807 4858 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778811 4858 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778816 4858 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778820 4858 flags.go:64] FLAG: --experimental-mounter-path="" Jan 27 20:07:35 crc 
kubenswrapper[4858]: I0127 20:07:35.778824 4858 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778828 4858 flags.go:64] FLAG: --fail-swap-on="true" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778832 4858 flags.go:64] FLAG: --feature-gates="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778837 4858 flags.go:64] FLAG: --file-check-frequency="20s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778841 4858 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778845 4858 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778850 4858 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778854 4858 flags.go:64] FLAG: --healthz-port="10248" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778858 4858 flags.go:64] FLAG: --help="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778863 4858 flags.go:64] FLAG: --hostname-override="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778867 4858 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778871 4858 flags.go:64] FLAG: --http-check-frequency="20s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778876 4858 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778881 4858 flags.go:64] FLAG: --image-credential-provider-config="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778884 4858 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778889 4858 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778894 4858 flags.go:64] FLAG: --image-service-endpoint="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778898 4858 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778902 4858 flags.go:64] FLAG: --kube-api-burst="100" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778906 4858 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778910 4858 flags.go:64] FLAG: --kube-api-qps="50" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778914 4858 flags.go:64] FLAG: --kube-reserved="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778918 4858 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778922 4858 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778926 4858 flags.go:64] FLAG: --kubelet-cgroups="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778930 4858 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778934 4858 flags.go:64] FLAG: --lock-file="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778939 4858 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778943 4858 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778947 4858 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778954 4858 flags.go:64] 
FLAG: --log-json-split-stream="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778958 4858 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778962 4858 flags.go:64] FLAG: --log-text-split-stream="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778966 4858 flags.go:64] FLAG: --logging-format="text" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778970 4858 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778974 4858 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778978 4858 flags.go:64] FLAG: --manifest-url="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778982 4858 flags.go:64] FLAG: --manifest-url-header="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778987 4858 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778991 4858 flags.go:64] FLAG: --max-open-files="1000000" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.778996 4858 flags.go:64] FLAG: --max-pods="110" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779001 4858 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779005 4858 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779009 4858 flags.go:64] FLAG: --memory-manager-policy="None" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779013 4858 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779017 4858 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779021 4858 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779026 4858 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779035 4858 flags.go:64] FLAG: --node-status-max-images="50" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779039 4858 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779043 4858 flags.go:64] FLAG: --oom-score-adj="-999" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779047 4858 flags.go:64] FLAG: --pod-cidr="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779052 4858 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779059 4858 flags.go:64] FLAG: --pod-manifest-path="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779063 4858 flags.go:64] FLAG: --pod-max-pids="-1" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779067 4858 flags.go:64] FLAG: --pods-per-core="0" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779071 4858 flags.go:64] FLAG: --port="10250" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779075 4858 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779079 4858 flags.go:64] FLAG: --provider-id="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779083 4858 
flags.go:64] FLAG: --qos-reserved="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779087 4858 flags.go:64] FLAG: --read-only-port="10255" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779091 4858 flags.go:64] FLAG: --register-node="true" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779096 4858 flags.go:64] FLAG: --register-schedulable="true" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779100 4858 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779107 4858 flags.go:64] FLAG: --registry-burst="10" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779111 4858 flags.go:64] FLAG: --registry-qps="5" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779115 4858 flags.go:64] FLAG: --reserved-cpus="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779119 4858 flags.go:64] FLAG: --reserved-memory="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779124 4858 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779128 4858 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779132 4858 flags.go:64] FLAG: --rotate-certificates="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779136 4858 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779140 4858 flags.go:64] FLAG: --runonce="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779145 4858 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779149 4858 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779153 4858 flags.go:64] FLAG: --seccomp-default="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779158 4858 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779162 4858 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779166 4858 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779170 4858 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779174 4858 flags.go:64] FLAG: --storage-driver-password="root" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779178 4858 flags.go:64] FLAG: --storage-driver-secure="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779182 4858 flags.go:64] FLAG: --storage-driver-table="stats" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779187 4858 flags.go:64] FLAG: --storage-driver-user="root" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779191 4858 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779195 4858 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779199 4858 flags.go:64] FLAG: --system-cgroups="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779202 4858 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779212 4858 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779216 
4858 flags.go:64] FLAG: --tls-cert-file="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779220 4858 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779226 4858 flags.go:64] FLAG: --tls-min-version="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779230 4858 flags.go:64] FLAG: --tls-private-key-file="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779234 4858 flags.go:64] FLAG: --topology-manager-policy="none" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779238 4858 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779242 4858 flags.go:64] FLAG: --topology-manager-scope="container" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779246 4858 flags.go:64] FLAG: --v="2" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779252 4858 flags.go:64] FLAG: --version="false" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779257 4858 flags.go:64] FLAG: --vmodule="" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779262 4858 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779267 4858 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779379 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779384 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779388 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779394 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779398 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779402 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779406 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779410 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779414 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779417 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779421 4858 feature_gate.go:330] unrecognized feature gate: Example Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779425 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779429 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779434 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
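[Editor's note] The flags.go:64 FLAG dump that just ended records the effective value of every command-line flag (emitted at --v=2), and some of those values describe live endpoints. For instance, --healthz-bind-address=127.0.0.1 with --healthz-port=10248 implies the kubelet's health endpoint can be probed from the node itself; a small stdlib sketch, assuming the default /healthz path:

```go
// Sketch: probe the kubelet health endpoint implied by the
// --healthz-bind-address=127.0.0.1 / --healthz-port=10248 flags above.
// Must run on the node itself; /healthz is the kubelet's default path.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 3 * time.Second}
	resp, err := client.Get("http://127.0.0.1:10248/healthz")
	if err != nil {
		log.Fatal(err) // e.g. connection refused while the kubelet is still starting
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s: %s\n", resp.Status, body) // expect "200 OK: ok"
}
```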
Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779438 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779442 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779446 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779449 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779453 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779457 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779460 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779464 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779468 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779472 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779476 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779479 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779483 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779486 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779490 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779494 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779497 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779501 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779504 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779507 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779511 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779515 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779518 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779522 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779526 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779529 4858 feature_gate.go:330] unrecognized feature 
gate: MultiArchInstallAzure Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779533 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779536 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779540 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779557 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779568 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779576 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779606 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779611 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779616 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779620 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779624 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779629 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779633 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779637 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779641 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779645 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779648 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779652 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779655 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779662 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779666 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779671 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779674 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779678 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779683 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779687 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779692 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779697 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779702 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779706 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.779721 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.779728 4858 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.790031 4858 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.790077 4858 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790166 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790176 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790181 4858 feature_gate.go:330] unrecognized feature gate: Example Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790188 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790193 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790199 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790205 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790210 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790216 
4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790221 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790225 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790230 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790235 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790240 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790244 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790249 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790255 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790264 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790269 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790274 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790278 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790284 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790288 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790293 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790298 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790302 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790307 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790311 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790316 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790321 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790329 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790337 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790342 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790347 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790354 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790359 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790363 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790368 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790372 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790377 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790381 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790386 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790390 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790395 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790399 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790404 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790409 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790413 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790417 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790422 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790426 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790431 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790435 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790443 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790448 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790453 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790460 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790465 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790469 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790474 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790478 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790483 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790487 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790493 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790497 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790502 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790507 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790511 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790515 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790520 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790527 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.790536 4858 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790698 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790707 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790713 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790719 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790723 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790728 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790733 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790738 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790743 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790749 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790753 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790758 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790762 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790766 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790773 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790779 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790784 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790788 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790793 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790797 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790802 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790806 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790811 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790815 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790821 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790826 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790831 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790835 4858 feature_gate.go:330] unrecognized feature gate: Example Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790840 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790844 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790849 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790853 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790858 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790862 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790867 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790872 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790876 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790883 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790889 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790894 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790900 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790905 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790910 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790915 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790920 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790925 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790930 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790934 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790940 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790945 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790950 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790954 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790959 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790963 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790968 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790973 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790979 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790984 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790990 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.790996 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.791002 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.791008 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.791013 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.791018 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.791024 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.791028 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.791034 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.791039 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.791044 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.791048 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 20:07:35 crc kubenswrapper[4858]: W0127 20:07:35.791055 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.791062 4858 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.792094 4858 server.go:940] "Client rotation is on, will bootstrap in background" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.797806 4858 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.797923 4858 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
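[Editor's note] The "Client rotation is on" and certificate_store.go lines show the kubelet loading its combined client cert/key PEM, and the entries just below compute a rotation deadline well ahead of the 2026-02-24 expiry. A small stdlib sketch of inspecting such a combined PEM's expiry, with the path taken from the log; passing the same file for both arguments works because tls.LoadX509KeyPair takes the CERTIFICATE block(s) from its first argument and skips to the PRIVATE KEY block in its second. Illustrative only, not kubelet's certificate_store.go.

```go
// Sketch: read a combined cert+key PEM like
// /var/lib/kubelet/pki/kubelet-client-current.pem and print its expiry,
// the NotAfter date the kubelet's rotation deadline is derived from.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
)

func main() {
	const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	pair, err := tls.LoadX509KeyPair(pem, pem) // same file holds cert and key
	if err != nil {
		log.Fatal(err)
	}
	leaf, err := x509.ParseCertificate(pair.Certificate[0])
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("certificate expiration:", leaf.NotAfter) // kubelet rotates well before this
}
```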
Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.822603 4858 server.go:997] "Starting client certificate rotation" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.822660 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.822882 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-21 22:48:36.571082454 +0000 UTC Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.822965 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.901863 4858 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 20:07:35 crc kubenswrapper[4858]: E0127 20:07:35.904843 4858 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.907430 4858 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.925876 4858 log.go:25] "Validated CRI v1 runtime API" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.965131 4858 log.go:25] "Validated CRI v1 image API" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.967198 4858 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.971229 4858 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-20-01-11-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.971256 4858 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.986756 4858 manager.go:217] Machine: {Timestamp:2026-01-27 20:07:35.983115599 +0000 UTC m=+0.690931325 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e10118a3-8956-4599-b1a5-221ab0a35848 BootID:2b322549-2745-4c40-a90f-d799751df1f2 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 
Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:98:61:5f Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:98:61:5f Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:3c:d5:05 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:cf:b8:eb Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:6c:9b:b1 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:7a:d1:49 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:c2:3d:b5:fe:c8:8b Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:26:22:be:d1:d7:11 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 
Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.987253 4858 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.987455 4858 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.987851 4858 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.988059 4858 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.988100 4858 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.988331 4858 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.988343 4858 
container_manager_linux.go:303] "Creating device plugin manager" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.989352 4858 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.989385 4858 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.989604 4858 state_mem.go:36] "Initialized new in-memory state store" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.989947 4858 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.995047 4858 kubelet.go:418] "Attempting to sync node with API server" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.995070 4858 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.995197 4858 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.995217 4858 kubelet.go:324] "Adding apiserver pod source" Jan 27 20:07:35 crc kubenswrapper[4858]: I0127 20:07:35.995229 4858 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 20:07:36 crc kubenswrapper[4858]: W0127 20:07:36.004093 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:36 crc kubenswrapper[4858]: W0127 20:07:36.004120 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:36 crc kubenswrapper[4858]: E0127 20:07:36.004214 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:07:36 crc kubenswrapper[4858]: E0127 20:07:36.004247 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.008154 4858 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.009595 4858 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
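[Editor's note] Every "connection refused" above is the same condition: api-int.crc.testing:6443 is not serving yet, and each client-go reflector keeps relisting until it is, while the kubelet carries on initializing. A self-contained sketch of that retry shape, assuming a plain HTTP GET and a capped exponential backoff rather than client-go's actual reflector machinery (the 200ms seed mirrors the lease controller's "interval=200ms" a few entries below):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// listWithBackoff retries a GET until the server answers or the
// attempt budget runs out, doubling the wait after each failure
// up to a cap, much like a reflector relisting while the
// apiserver is still coming up.
func listWithBackoff(url string, attempts int) error {
	backoff := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		fmt.Printf("list failed (%v), retrying in %v\n", err, backoff)
		time.Sleep(backoff)
		if backoff *= 2; backoff > 30*time.Second {
			backoff = 30 * time.Second
		}
	}
	return fmt.Errorf("gave up after %d attempts", attempts)
}

func main() {
	if err := listWithBackoff("https://api-int.crc.testing:6443/api/v1/nodes", 5); err != nil {
		fmt.Println(err)
	}
}
```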
Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.011093 4858 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.012749 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.012774 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.012781 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.012788 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.012799 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.012805 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.012811 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.012855 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.012863 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.012870 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.012880 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.012886 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.015206 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.015777 4858 server.go:1280] "Started kubelet" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.016888 4858 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.017116 4858 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 20:07:36 crc systemd[1]: Started Kubernetes Kubelet. 
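[Editor's note] The burst of "Loaded volume plugin" lines is the kubelet populating its in-tree volume plugin registry before systemd reports "Started Kubernetes Kubelet"; every later mount request is resolved against this table by plugin name. A toy registry under that assumption (names only, no real mount logic, nothing here is the kubelet's actual interface):

```go
package main

import "fmt"

// VolumePlugin is a stand-in for the kubelet's plugin interface;
// real plugins implement mount/unmount, this one only has a name.
type VolumePlugin struct{ Name string }

type registry struct{ plugins map[string]VolumePlugin }

// register records a plugin, echoing the log line format above.
func (r *registry) register(name string) {
	r.plugins[name] = VolumePlugin{Name: name}
	fmt.Printf("Loaded volume plugin %q\n", name)
}

// lookup resolves a volume spec to its plugin, the step that
// follows once pods start mounting volumes.
func (r *registry) lookup(name string) (VolumePlugin, bool) {
	p, ok := r.plugins[name]
	return p, ok
}

func main() {
	r := &registry{plugins: map[string]VolumePlugin{}}
	for _, n := range []string{
		"kubernetes.io/empty-dir",
		"kubernetes.io/configmap",
		"kubernetes.io/projected",
		"kubernetes.io/csi",
	} {
		r.register(n)
	}
	if p, ok := r.lookup("kubernetes.io/csi"); ok {
		fmt.Println("resolved:", p.Name)
	}
}
```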
Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.017404 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.017577 4858 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.018817 4858 server.go:460] "Adding debug handlers to kubelet server" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.018959 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.019024 4858 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.019092 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 08:46:36.554656245 +0000 UTC Jan 27 20:07:36 crc kubenswrapper[4858]: E0127 20:07:36.019299 4858 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 20:07:36 crc kubenswrapper[4858]: E0127 20:07:36.019672 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.56:6443: connect: connection refused" interval="200ms" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.019761 4858 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.019772 4858 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.019982 4858 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 20:07:36 crc kubenswrapper[4858]: W0127 20:07:36.020237 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:36 crc kubenswrapper[4858]: E0127 20:07:36.020434 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.020892 4858 factory.go:55] Registering systemd factory Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.020919 4858 factory.go:221] Registration of the systemd container factory successfully Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.022556 4858 factory.go:153] Registering CRI-O factory Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.022580 4858 factory.go:221] Registration of the crio container factory successfully Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.022730 4858 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: 
no such file or directory Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.022754 4858 factory.go:103] Registering Raw factory Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.022772 4858 manager.go:1196] Started watching for new ooms in manager Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.023367 4858 manager.go:319] Starting recovery of all containers Jan 27 20:07:36 crc kubenswrapper[4858]: E0127 20:07:36.029138 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.56:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188eaf4eab148720 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 20:07:36.015742752 +0000 UTC m=+0.723558458,LastTimestamp:2026-01-27 20:07:36.015742752 +0000 UTC m=+0.723558458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.035610 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.035675 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.035691 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.035705 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.035751 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.035766 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.035781 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.035794 4858 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.035814 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.035828 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.035841 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.035855 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.035868 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.035886 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.035900 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.035914 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039109 4858 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039166 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" 
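[Editor's note] All of the reconstruct.go:130 entries in this stretch are the volume manager rebuilding its actual-state-of-world after the kubelet restart: it scans what is already mounted under each pod directory and marks every volume "uncertain" until the desired state confirms or cleans it up. A sketch of that scan, assuming the standard /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<volume> layout (on disk the plugin segment uses `~`, e.g. kubernetes.io~configmap):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// reconstruct walks pods/<uid>/volumes/<plugin>/<name> and reports
// each directory found as an "uncertain" volume, mirroring the
// (podName, volumeName) shape of the reconstruct.go log lines.
func reconstruct(root string) error {
	pods, err := os.ReadDir(filepath.Join(root, "pods"))
	if err != nil {
		return err
	}
	for _, pod := range pods {
		volRoot := filepath.Join(root, "pods", pod.Name(), "volumes")
		plugins, err := os.ReadDir(volRoot)
		if err != nil {
			continue // pod has no volumes directory
		}
		for _, plugin := range plugins {
			vols, err := os.ReadDir(filepath.Join(volRoot, plugin.Name()))
			if err != nil {
				continue
			}
			for _, v := range vols {
				fmt.Printf("uncertain volume podName=%q volumeName=%q/%s\n",
					pod.Name(), plugin.Name(), v.Name())
			}
		}
	}
	return nil
}

func main() {
	if err := reconstruct("/var/lib/kubelet"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```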
Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039185 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039200 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039214 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039229 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039241 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039254 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039272 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039286 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039298 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039317 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039356 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" 
seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039370 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039385 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039397 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039413 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039427 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039440 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039472 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039485 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039498 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039511 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039524 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: 
I0127 20:07:36.039536 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039568 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039613 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039627 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039639 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039653 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039666 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039686 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039701 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039754 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039769 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039782 4858 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039795 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039814 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039827 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039840 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039853 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039866 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039880 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039892 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039905 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039918 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039932 4858 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039944 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039960 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039973 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039986 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.039999 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040025 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040037 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040050 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040064 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040076 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040100 4858 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040115 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040129 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040142 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040156 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040169 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040198 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040218 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040236 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040249 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040262 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040275 4858 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040289 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040314 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040330 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040356 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040371 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040385 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040400 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040414 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040426 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040439 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040454 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040470 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040485 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040497 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040511 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040525 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040540 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040574 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040589 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040603 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040626 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040643 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040658 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040672 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040688 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040703 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040718 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040735 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040754 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040769 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040782 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040798 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040811 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040825 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040840 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040854 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040868 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040881 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040895 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040908 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040922 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040936 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040950 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040965 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040979 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.040991 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041006 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041019 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041033 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041046 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041061 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041074 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041089 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041104 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041117 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041131 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041148 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041164 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041180 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041196 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041210 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041224 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041325 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041342 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041355 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041389 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041409 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041424 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041438 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041451 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041463 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041475 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041496 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041511 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041522 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041537 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041575 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041590 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041605 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041617 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041629 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041642 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041655 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041669 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041688 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041701 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041716 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041730 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041744 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041757 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041771 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041786 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041799 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041812 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041827 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041841 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041856 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041869 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041883 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041896 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041910 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041922 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041935 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041948 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041961 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041975 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.041988 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042003 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042017 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042030 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042045 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042060 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042075 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042100 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042112 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042124 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042146 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042159 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042179 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042192 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042205 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042218 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042230 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042244 4858 reconstruct.go:97] "Volume reconstruction finished" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.042253 4858 reconciler.go:26] "Reconciler: start to sync state" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.047001 4858 manager.go:324] Recovery completed Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.058618 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.061198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.061265 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.061284 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.062336 4858 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.062381 4858 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.062406 4858 state_mem.go:36] "Initialized new in-memory state store" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.067502 4858 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.069534 4858 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.069660 4858 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.069690 4858 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 20:07:36 crc kubenswrapper[4858]: E0127 20:07:36.069749 4858 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 20:07:36 crc kubenswrapper[4858]: W0127 20:07:36.070691 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:36 crc kubenswrapper[4858]: E0127 20:07:36.070768 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.086906 4858 policy_none.go:49] "None policy: Start" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.088133 4858 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.088167 4858 state_mem.go:35] "Initializing new in-memory state store" Jan 27 20:07:36 crc kubenswrapper[4858]: E0127 20:07:36.120493 4858 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.144763 4858 manager.go:334] "Starting Device Plugin manager" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.144832 4858 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.144847 4858 server.go:79] "Starting device plugin registration server" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.145213 4858 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.145236 4858 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.145470 4858 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.145594 4858 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.145612 4858 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 20:07:36 crc kubenswrapper[4858]: E0127 20:07:36.155162 4858 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.170372 4858 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 20:07:36 crc kubenswrapper[4858]: 
I0127 20:07:36.170520 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.171659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.171699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.171711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.171871 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.172015 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.172060 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.172734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.172758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.172770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.172835 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.172856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.172867 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.172880 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.172995 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.173022 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.173497 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.173535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.173558 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.173690 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.174187 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.174220 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.174977 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.175000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.175010 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.175065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.175084 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.175092 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.175123 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.175137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.175156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.175166 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.175262 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.175301 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.175831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.175856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.175866 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.175984 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.176004 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.176517 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.176536 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.176543 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.176617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.176634 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.176644 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:36 crc kubenswrapper[4858]: E0127 20:07:36.220197 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.56:6443: connect: connection refused" interval="400ms" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.244820 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.244875 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.245073 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.245136 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.245159 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.245203 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.245227 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.245283 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.245320 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.245325 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.245370 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.245407 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.245450 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.245488 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.245511 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.245530 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.246867 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.246902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.246912 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.246937 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 20:07:36 crc kubenswrapper[4858]: E0127 20:07:36.247303 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.56:6443: connect: connection refused" node="crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.346916 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.346979 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347004 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347021 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347081 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347105 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc 
kubenswrapper[4858]: I0127 20:07:36.347127 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347147 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347168 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347185 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347200 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347216 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347232 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347234 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347297 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347249 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" 
(UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347330 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347357 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347362 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347330 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347339 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347260 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347360 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347360 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347260 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347300 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347298 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347388 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347267 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.347855 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.447472 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.448987 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.449024 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.449036 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.449060 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 20:07:36 crc kubenswrapper[4858]: E0127 20:07:36.449349 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.56:6443: connect: connection refused" node="crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.499760 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.517628 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.537061 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.545117 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.552484 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 20:07:36 crc kubenswrapper[4858]: W0127 20:07:36.568184 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-a0e2e93da7b28fb5477d9eb48b3b51c137277316c26f2a3fb707fef2232d1bac WatchSource:0}: Error finding container a0e2e93da7b28fb5477d9eb48b3b51c137277316c26f2a3fb707fef2232d1bac: Status 404 returned error can't find the container with id a0e2e93da7b28fb5477d9eb48b3b51c137277316c26f2a3fb707fef2232d1bac Jan 27 20:07:36 crc kubenswrapper[4858]: W0127 20:07:36.576211 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-5075371af9ad5388926a9f135605dc5dde512ef73929f33f0b4bc73d86c5a722 WatchSource:0}: Error finding container 5075371af9ad5388926a9f135605dc5dde512ef73929f33f0b4bc73d86c5a722: Status 404 returned error can't find the container with id 5075371af9ad5388926a9f135605dc5dde512ef73929f33f0b4bc73d86c5a722 Jan 27 20:07:36 crc kubenswrapper[4858]: W0127 20:07:36.580660 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-aaf1b9089ee9fc334d089d778797e04947dac4124b2466d6feab95e10d78f05f WatchSource:0}: Error finding container aaf1b9089ee9fc334d089d778797e04947dac4124b2466d6feab95e10d78f05f: Status 404 returned error can't find the container with id aaf1b9089ee9fc334d089d778797e04947dac4124b2466d6feab95e10d78f05f Jan 27 20:07:36 crc kubenswrapper[4858]: W0127 20:07:36.581433 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-939d2ce7a176778a89d61ad699a8dbbd9fc5e68d5ba7b8771f47d87290e5bd36 WatchSource:0}: Error finding container 939d2ce7a176778a89d61ad699a8dbbd9fc5e68d5ba7b8771f47d87290e5bd36: Status 404 returned error can't find the container with id 939d2ce7a176778a89d61ad699a8dbbd9fc5e68d5ba7b8771f47d87290e5bd36 Jan 27 20:07:36 crc kubenswrapper[4858]: E0127 20:07:36.621518 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.56:6443: connect: connection refused" interval="800ms" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.850092 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.851917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.851957 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.851968 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:36 crc kubenswrapper[4858]: I0127 20:07:36.851993 4858 kubelet_node_status.go:76] "Attempting to 
register node" node="crc" Jan 27 20:07:36 crc kubenswrapper[4858]: E0127 20:07:36.852568 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.56:6443: connect: connection refused" node="crc" Jan 27 20:07:36 crc kubenswrapper[4858]: W0127 20:07:36.970744 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:36 crc kubenswrapper[4858]: E0127 20:07:36.970871 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:07:37 crc kubenswrapper[4858]: I0127 20:07:37.018334 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:37 crc kubenswrapper[4858]: I0127 20:07:37.019205 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 17:54:09.278010891 +0000 UTC Jan 27 20:07:37 crc kubenswrapper[4858]: I0127 20:07:37.074621 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"939d2ce7a176778a89d61ad699a8dbbd9fc5e68d5ba7b8771f47d87290e5bd36"} Jan 27 20:07:37 crc kubenswrapper[4858]: I0127 20:07:37.075826 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aaf1b9089ee9fc334d089d778797e04947dac4124b2466d6feab95e10d78f05f"} Jan 27 20:07:37 crc kubenswrapper[4858]: I0127 20:07:37.076993 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5075371af9ad5388926a9f135605dc5dde512ef73929f33f0b4bc73d86c5a722"} Jan 27 20:07:37 crc kubenswrapper[4858]: I0127 20:07:37.078026 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"90c0ed447968a171a0e88ff6362db09335de586f70388e82a252663ab9554b0e"} Jan 27 20:07:37 crc kubenswrapper[4858]: I0127 20:07:37.078886 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a0e2e93da7b28fb5477d9eb48b3b51c137277316c26f2a3fb707fef2232d1bac"} Jan 27 20:07:37 crc kubenswrapper[4858]: W0127 20:07:37.082609 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:37 crc kubenswrapper[4858]: E0127 20:07:37.082687 4858 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:07:37 crc kubenswrapper[4858]: W0127 20:07:37.403427 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:37 crc kubenswrapper[4858]: E0127 20:07:37.403529 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:07:37 crc kubenswrapper[4858]: W0127 20:07:37.407443 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:37 crc kubenswrapper[4858]: E0127 20:07:37.407559 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:07:37 crc kubenswrapper[4858]: E0127 20:07:37.422710 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.56:6443: connect: connection refused" interval="1.6s" Jan 27 20:07:37 crc kubenswrapper[4858]: I0127 20:07:37.653290 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:37 crc kubenswrapper[4858]: I0127 20:07:37.654450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:37 crc kubenswrapper[4858]: I0127 20:07:37.654490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:37 crc kubenswrapper[4858]: I0127 20:07:37.654503 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:37 crc kubenswrapper[4858]: I0127 20:07:37.654527 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 20:07:37 crc kubenswrapper[4858]: E0127 20:07:37.655000 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.56:6443: connect: connection refused" node="crc" Jan 27 20:07:37 crc kubenswrapper[4858]: I0127 20:07:37.977944 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 20:07:37 crc kubenswrapper[4858]: E0127 20:07:37.979173 4858 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: 
Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:07:38 crc kubenswrapper[4858]: I0127 20:07:38.018583 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:38 crc kubenswrapper[4858]: I0127 20:07:38.019599 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 04:29:18.269051682 +0000 UTC Jan 27 20:07:39 crc kubenswrapper[4858]: W0127 20:07:39.015399 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:39 crc kubenswrapper[4858]: E0127 20:07:39.015474 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.017863 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.019913 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 11:52:46.330143665 +0000 UTC Jan 27 20:07:39 crc kubenswrapper[4858]: E0127 20:07:39.023773 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.56:6443: connect: connection refused" interval="3.2s" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.087659 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828"} Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.087819 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.090363 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.090389 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.090454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.090478 4858 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.090747 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055"} Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.091227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.091259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.091268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.091913 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce"} Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.091996 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.093043 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.093103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.093127 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.093483 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1937696495b8d306a64f1efcdab4efa50eeafd76c0352b78e4d2d4b43c3bcd84"} Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.093602 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.094602 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.094630 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.094642 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.095192 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5"} Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.256144 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.257183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 
20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.257227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.257238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:39 crc kubenswrapper[4858]: I0127 20:07:39.257260 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 20:07:39 crc kubenswrapper[4858]: E0127 20:07:39.257753 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.56:6443: connect: connection refused" node="crc" Jan 27 20:07:39 crc kubenswrapper[4858]: W0127 20:07:39.567288 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:39 crc kubenswrapper[4858]: E0127 20:07:39.567399 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:07:39 crc kubenswrapper[4858]: W0127 20:07:39.712920 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:39 crc kubenswrapper[4858]: E0127 20:07:39.712999 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.018285 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.020449 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 21:32:05.655967224 +0000 UTC Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.100083 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10"} Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.100130 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5"} Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.101961 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828" exitCode=0 Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.102094 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828"} Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.102215 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:40 crc kubenswrapper[4858]: W0127 20:07:40.102782 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:40 crc kubenswrapper[4858]: E0127 20:07:40.102903 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.103379 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.103412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.103426 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.104251 4858 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055" exitCode=0 Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.104382 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055"} Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.104439 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.105504 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.105537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.105568 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.106379 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce" exitCode=0 Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.106481 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 
20:07:40.106474 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce"} Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.107357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.107421 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.107444 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.108958 4858 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="1937696495b8d306a64f1efcdab4efa50eeafd76c0352b78e4d2d4b43c3bcd84" exitCode=0 Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.108989 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"1937696495b8d306a64f1efcdab4efa50eeafd76c0352b78e4d2d4b43c3bcd84"} Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.109095 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.109660 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.110510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.110540 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.110583 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.110595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.110583 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:40 crc kubenswrapper[4858]: I0127 20:07:40.110625 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.018925 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.021045 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 00:43:35.547313808 +0000 UTC Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.114909 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165"} Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.117002 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"33d0a534dcd9e97a73f9b9fb89b269a118c6cd9d353f36be6699946cf46a8651"} Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.117094 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.117910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.117942 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.117954 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.119458 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692"} Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.119507 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.120497 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.120567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.120579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.121978 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6ee7f8cb8cd1313fee38d658392c84878c4f22e406e5b48926b09a362999077c"} Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.122001 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2bcbab522a48af8a7103c1e3c0a2bf06df8763675f2f39b24f559d3a40ae32e3"} Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.123497 4858 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a" exitCode=0 Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.123542 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a"} Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.123590 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:41 
crc kubenswrapper[4858]: I0127 20:07:41.124338 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.124373 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:41 crc kubenswrapper[4858]: I0127 20:07:41.124387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.018485 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.021716 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 18:16:34.711255478 +0000 UTC Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.129905 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d11991cd32eec68e9104c1f58fc2bd7d2f78a38e0f3217d4dd1bbc52038bed63"} Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.129965 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.131752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.131778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.131788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.133385 4858 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170" exitCode=0 Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.133650 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170"} Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.133739 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.134910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.135005 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.135022 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.138026 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3"} Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.138087 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.138085 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730"} Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.138231 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47"} Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.138048 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.142376 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.142528 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.142640 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.142716 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.142745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.142760 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:42 crc kubenswrapper[4858]: E0127 20:07:42.224406 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.56:6443: connect: connection refused" interval="6.4s" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.362251 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 20:07:42 crc kubenswrapper[4858]: E0127 20:07:42.363478 4858 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.458656 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.460435 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.460512 4858 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.460531 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.460618 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 20:07:42 crc kubenswrapper[4858]: E0127 20:07:42.461523 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.56:6443: connect: connection refused" node="crc" Jan 27 20:07:42 crc kubenswrapper[4858]: I0127 20:07:42.980323 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.018676 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.022661 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 05:39:44.131531048 +0000 UTC Jan 27 20:07:43 crc kubenswrapper[4858]: E0127 20:07:43.118485 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.56:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188eaf4eab148720 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 20:07:36.015742752 +0000 UTC m=+0.723558458,LastTimestamp:2026-01-27 20:07:36.015742752 +0000 UTC m=+0.723558458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.148937 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32"} Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.148980 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e"} Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.148989 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff"} Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.157575 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.158059 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.158482 4858 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2245bf54afe907ffc08914c2a3459bdc0bbe12c526ee2c1e57d20b852a98b645"} Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.158523 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.158601 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.159661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.159689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.159700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.160407 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.160427 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.160434 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.160756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.160796 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.160805 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.164643 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:43 crc kubenswrapper[4858]: W0127 20:07:43.633772 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:43 crc kubenswrapper[4858]: E0127 20:07:43.633884 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:07:43 crc kubenswrapper[4858]: I0127 20:07:43.664084 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.018576 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:44 
crc kubenswrapper[4858]: I0127 20:07:44.023756 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:40:45.258722994 +0000 UTC Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.163849 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd"} Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.163915 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.163927 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad"} Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.165061 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.165098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.165115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.166382 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.168623 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2245bf54afe907ffc08914c2a3459bdc0bbe12c526ee2c1e57d20b852a98b645" exitCode=255 Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.168701 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2245bf54afe907ffc08914c2a3459bdc0bbe12c526ee2c1e57d20b852a98b645"} Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.168737 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.168849 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.169417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.169439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.169448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.169963 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.169991 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 
20:07:44.169999 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:44 crc kubenswrapper[4858]: I0127 20:07:44.170487 4858 scope.go:117] "RemoveContainer" containerID="2245bf54afe907ffc08914c2a3459bdc0bbe12c526ee2c1e57d20b852a98b645" Jan 27 20:07:44 crc kubenswrapper[4858]: W0127 20:07:44.631961 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:07:44 crc kubenswrapper[4858]: E0127 20:07:44.632070 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:07:45 crc kubenswrapper[4858]: I0127 20:07:45.024638 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:18:34.578040848 +0000 UTC Jan 27 20:07:45 crc kubenswrapper[4858]: I0127 20:07:45.135681 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 20:07:45 crc kubenswrapper[4858]: I0127 20:07:45.172923 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 20:07:45 crc kubenswrapper[4858]: I0127 20:07:45.174763 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69"} Jan 27 20:07:45 crc kubenswrapper[4858]: I0127 20:07:45.174905 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:45 crc kubenswrapper[4858]: I0127 20:07:45.175018 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:45 crc kubenswrapper[4858]: I0127 20:07:45.175848 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:45 crc kubenswrapper[4858]: I0127 20:07:45.175883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:45 crc kubenswrapper[4858]: I0127 20:07:45.175894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:45 crc kubenswrapper[4858]: I0127 20:07:45.176087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:45 crc kubenswrapper[4858]: I0127 20:07:45.176182 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:45 crc kubenswrapper[4858]: I0127 20:07:45.176256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:46 crc kubenswrapper[4858]: I0127 20:07:46.024880 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2026-01-02 04:03:35.556307744 +0000 UTC Jan 27 20:07:46 crc kubenswrapper[4858]: E0127 20:07:46.155592 4858 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 20:07:46 crc kubenswrapper[4858]: I0127 20:07:46.177262 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:46 crc kubenswrapper[4858]: I0127 20:07:46.177282 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:46 crc kubenswrapper[4858]: I0127 20:07:46.177316 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:46 crc kubenswrapper[4858]: I0127 20:07:46.178936 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:46 crc kubenswrapper[4858]: I0127 20:07:46.178971 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:46 crc kubenswrapper[4858]: I0127 20:07:46.178980 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:46 crc kubenswrapper[4858]: I0127 20:07:46.179072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:46 crc kubenswrapper[4858]: I0127 20:07:46.179102 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:46 crc kubenswrapper[4858]: I0127 20:07:46.179117 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:47 crc kubenswrapper[4858]: I0127 20:07:47.025691 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:49:47.470260686 +0000 UTC Jan 27 20:07:47 crc kubenswrapper[4858]: I0127 20:07:47.164007 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:47 crc kubenswrapper[4858]: I0127 20:07:47.179927 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:47 crc kubenswrapper[4858]: I0127 20:07:47.180700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:47 crc kubenswrapper[4858]: I0127 20:07:47.180738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:47 crc kubenswrapper[4858]: I0127 20:07:47.180750 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:47 crc kubenswrapper[4858]: I0127 20:07:47.311927 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:07:47 crc kubenswrapper[4858]: I0127 20:07:47.312078 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:47 crc kubenswrapper[4858]: I0127 20:07:47.313128 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:47 crc kubenswrapper[4858]: I0127 20:07:47.313156 4858 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:47 crc kubenswrapper[4858]: I0127 20:07:47.313167 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:47 crc kubenswrapper[4858]: I0127 20:07:47.321921 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 20:07:47 crc kubenswrapper[4858]: I0127 20:07:47.322090 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:47 crc kubenswrapper[4858]: I0127 20:07:47.324033 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:47 crc kubenswrapper[4858]: I0127 20:07:47.324089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:47 crc kubenswrapper[4858]: I0127 20:07:47.324104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:48 crc kubenswrapper[4858]: I0127 20:07:48.027021 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:37:12.204896612 +0000 UTC Jan 27 20:07:48 crc kubenswrapper[4858]: I0127 20:07:48.181970 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:48 crc kubenswrapper[4858]: I0127 20:07:48.182814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:48 crc kubenswrapper[4858]: I0127 20:07:48.182852 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:48 crc kubenswrapper[4858]: I0127 20:07:48.182863 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:48 crc kubenswrapper[4858]: I0127 20:07:48.862384 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:48 crc kubenswrapper[4858]: I0127 20:07:48.863491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:48 crc kubenswrapper[4858]: I0127 20:07:48.863526 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:48 crc kubenswrapper[4858]: I0127 20:07:48.863538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:48 crc kubenswrapper[4858]: I0127 20:07:48.863586 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 20:07:49 crc kubenswrapper[4858]: I0127 20:07:49.018664 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:07:49 crc kubenswrapper[4858]: I0127 20:07:49.018811 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:49 crc kubenswrapper[4858]: I0127 20:07:49.020045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:49 crc kubenswrapper[4858]: I0127 20:07:49.020099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:49 crc 
kubenswrapper[4858]: I0127 20:07:49.020112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:49 crc kubenswrapper[4858]: I0127 20:07:49.027450 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:21:17.17105409 +0000 UTC Jan 27 20:07:49 crc kubenswrapper[4858]: I0127 20:07:49.041581 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:07:49 crc kubenswrapper[4858]: I0127 20:07:49.184489 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:49 crc kubenswrapper[4858]: I0127 20:07:49.185865 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:49 crc kubenswrapper[4858]: I0127 20:07:49.185913 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:49 crc kubenswrapper[4858]: I0127 20:07:49.185926 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:49 crc kubenswrapper[4858]: I0127 20:07:49.188502 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:07:50 crc kubenswrapper[4858]: I0127 20:07:50.027567 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:30:57.221001869 +0000 UTC Jan 27 20:07:50 crc kubenswrapper[4858]: I0127 20:07:50.064947 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:07:50 crc kubenswrapper[4858]: I0127 20:07:50.187015 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:50 crc kubenswrapper[4858]: I0127 20:07:50.188108 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:50 crc kubenswrapper[4858]: I0127 20:07:50.188169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:50 crc kubenswrapper[4858]: I0127 20:07:50.188179 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:50 crc kubenswrapper[4858]: I0127 20:07:50.520599 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 20:07:51 crc kubenswrapper[4858]: I0127 20:07:51.028197 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 08:43:25.801239174 +0000 UTC Jan 27 20:07:51 crc kubenswrapper[4858]: I0127 20:07:51.189424 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:51 crc kubenswrapper[4858]: I0127 20:07:51.190189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:51 crc kubenswrapper[4858]: I0127 20:07:51.190235 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:51 
crc kubenswrapper[4858]: I0127 20:07:51.190249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:52 crc kubenswrapper[4858]: I0127 20:07:52.028649 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 04:56:24.768117178 +0000 UTC Jan 27 20:07:52 crc kubenswrapper[4858]: I0127 20:07:52.701803 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 20:07:52 crc kubenswrapper[4858]: I0127 20:07:52.701872 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 20:07:52 crc kubenswrapper[4858]: I0127 20:07:52.709271 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 20:07:52 crc kubenswrapper[4858]: I0127 20:07:52.709367 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 20:07:53 crc kubenswrapper[4858]: I0127 20:07:53.029155 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:08:55.569179289 +0000 UTC Jan 27 20:07:53 crc kubenswrapper[4858]: I0127 20:07:53.065576 4858 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 20:07:53 crc kubenswrapper[4858]: I0127 20:07:53.065649 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 20:07:54 crc kubenswrapper[4858]: I0127 20:07:54.029791 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:13:25.521515544 +0000 UTC Jan 27 20:07:55 crc kubenswrapper[4858]: I0127 20:07:55.030695 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 10:42:13.143790321 +0000 
UTC Jan 27 20:07:55 crc kubenswrapper[4858]: I0127 20:07:55.735943 4858 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 20:07:56 crc kubenswrapper[4858]: I0127 20:07:56.031679 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 15:37:22.706226342 +0000 UTC Jan 27 20:07:56 crc kubenswrapper[4858]: E0127 20:07:56.155725 4858 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 20:07:56 crc kubenswrapper[4858]: I0127 20:07:56.751872 4858 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.032189 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:59:40.581854308 +0000 UTC Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.170058 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.170273 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.171677 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.171735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.171794 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.173789 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.203491 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.204880 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.204924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.204937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.345701 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.345967 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.347856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.347903 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.347918 4858 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.359770 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.707520 4858 trace.go:236] Trace[16555270]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 20:07:45.346) (total time: 12361ms): Jan 27 20:07:57 crc kubenswrapper[4858]: Trace[16555270]: ---"Objects listed" error: 12361ms (20:07:57.707) Jan 27 20:07:57 crc kubenswrapper[4858]: Trace[16555270]: [12.361318283s] [12.361318283s] END Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.707575 4858 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.707638 4858 trace.go:236] Trace[133065850]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 20:07:45.052) (total time: 12654ms): Jan 27 20:07:57 crc kubenswrapper[4858]: Trace[133065850]: ---"Objects listed" error: 12654ms (20:07:57.707) Jan 27 20:07:57 crc kubenswrapper[4858]: Trace[133065850]: [12.654731968s] [12.654731968s] END Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.707694 4858 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.708464 4858 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 20:07:57 crc kubenswrapper[4858]: E0127 20:07:57.709841 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.712318 4858 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.743272 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41014->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.743339 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41014->192.168.126.11:17697: read: connection reset by peer" Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.743688 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 20:07:57 crc kubenswrapper[4858]: I0127 20:07:57.743716 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" 
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.005401 4858 apiserver.go:52] "Watching apiserver" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.015230 4858 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.015835 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.016862 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.016985 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.017042 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.017084 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.017118 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.017165 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.017425 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.017828 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.017922 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.021161 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.021286 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.021342 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.021507 4858 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.021780 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.022260 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.022366 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.022426 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.022429 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.022573 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.033068 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 07:18:44.723677147 +0000 UTC Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.046170 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.058867 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.069574 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.079352 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.089776 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.100209 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111497 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111542 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111583 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111636 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111652 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111669 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111688 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 20:07:58 crc 
kubenswrapper[4858]: I0127 20:07:58.111703 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111723 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111738 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111754 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111770 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111784 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111799 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111816 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111831 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111847 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111863 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111902 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111919 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111933 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111947 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111961 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111975 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111990 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.112006 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.112152 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.112182 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.112204 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.112227 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.112249 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.112469 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.112487 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.112507 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.112524 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.112539 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.112601 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.112620 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.112646 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.112673 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.112961 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113239 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113274 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113298 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113352 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113405 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113413 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113442 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113443 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113452 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113469 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113494 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113519 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113570 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113595 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113635 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113658 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113682 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113706 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113729 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114468 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114493 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114522 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114562 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114585 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114610 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114634 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114657 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114679 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114703 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114735 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114818 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114841 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114866 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114889 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114910 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114931 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115265 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115305 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115328 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115354 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115379 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115406 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115431 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115454 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115479 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115504 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115527 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115563 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115588 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115612 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115637 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115659 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115701 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115727 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115747 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115770 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115793 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115815 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115837 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115859 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115881 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115907 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115931 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115968 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115995 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116021 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116049 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116070 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116095 4858 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116119 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116143 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116183 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116208 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116230 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116254 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116277 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116298 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116456 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116489 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116512 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116534 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116570 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116594 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116619 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116643 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116668 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116694 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116717 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116740 4858 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116764 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116786 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116810 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116832 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116911 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116936 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116964 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116987 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117011 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 20:07:58 
crc kubenswrapper[4858]: I0127 20:07:58.117034 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117058 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117080 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117120 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117144 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117166 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117248 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117279 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117302 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117324 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117346 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117377 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117400 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117425 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117450 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117474 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117495 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117519 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117541 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117607 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117632 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117654 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117682 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117707 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117730 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117756 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117780 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117806 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117829 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117853 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117878 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117900 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117925 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117951 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117974 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.117999 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118034 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118058 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118082 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118107 4858 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118132 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118156 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118181 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118205 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118229 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118253 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118277 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118302 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118327 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 
20:07:58.118351 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118376 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118401 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118425 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118450 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118475 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118499 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118524 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118594 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.118625 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.111856 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.139768 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.139827 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.139851 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.139882 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113901 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.113910 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.145262 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114127 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114157 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114281 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114340 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114599 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114697 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.145359 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.114891 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115070 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.145424 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115129 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115220 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115578 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.115943 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116163 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116231 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.116869 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.139734 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.139830 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.139862 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.139895 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.139947 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.139913 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.139986 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.139999 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.140041 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.140065 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.140062 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.140026 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.140137 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.140270 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.140330 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.140348 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.140491 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.140495 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.140708 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.140715 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.140756 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.140776 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.140868 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.140978 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.141018 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.141203 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.141256 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.141348 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.141353 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.141504 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.141692 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.141711 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.141967 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.141974 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.142191 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.142324 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.142406 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.142798 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.143093 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.143186 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.143746 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:07:58.643723992 +0000 UTC m=+23.351539698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.143983 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.144355 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.144615 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.144664 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.144860 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.145770 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.144863 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.144880 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.145035 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.145096 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.145638 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.145757 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.146051 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.147298 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.146176 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.146258 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.146270 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.146395 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.146409 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.146441 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.147350 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.146072 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.146440 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.145739 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.146661 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.146798 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.146813 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.147287 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.147420 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.147441 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.147352 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.147677 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.147689 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.147826 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.147993 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.148076 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.147901 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.148384 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.148858 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.148956 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.149168 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.149382 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.149677 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.149819 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.150015 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.150068 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.150228 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.150356 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.150440 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.151070 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.151261 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.151603 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.151629 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.151766 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.152381 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.152447 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.152465 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.152560 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.152584 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.152626 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.152849 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.153086 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.153331 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.153745 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.153802 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.153839 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.153894 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.154188 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.154238 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.154252 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.154324 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.154644 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.154723 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.154923 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.155176 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.155240 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.155792 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.155960 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.156188 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.156469 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.156618 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.156657 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.156682 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.156713 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.156748 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 20:07:58 crc kubenswrapper[4858]: 
I0127 20:07:58.156780 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.156811 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.156836 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.156882 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.156910 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.156996 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.157028 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.157055 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.157075 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.157188 4858 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.157206 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.157219 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.157229 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.157239 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.157249 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.157259 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.157725 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.156746 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.157178 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.157645 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.157817 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.158161 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.158248 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.158271 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.158524 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.158594 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.158623 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.158859 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.158944 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.159113 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.159973 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.160293 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.160430 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.160856 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.161156 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.161455 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.165398 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.165477 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:07:58.665460668 +0000 UTC m=+23.373276374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.165818 4858 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.165892 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.166005 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.166052 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:07:58.666037365 +0000 UTC m=+23.373853071 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.170591 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.173139 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.173251 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.173504 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.175057 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.175725 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.175817 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.175980 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.177094 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.180315 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.180331 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.180379 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.180397 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.180465 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 20:07:58.680444547 +0000 UTC m=+23.388260253 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.180566 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.180820 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.181711 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.182196 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.184067 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.184898 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.184912 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.184922 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.184959 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 20:07:58.684946479 +0000 UTC m=+23.392762195 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.190982 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.191016 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.191232 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.192609 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.192625 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.192716 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.192929 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.192983 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.193151 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.193326 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.193693 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.196081 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.196562 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.196664 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.196841 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.196862 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.202396 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.209031 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.209531 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.211296 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69" exitCode=255
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.211431 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69"}
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.211536 4858 scope.go:117] "RemoveContainer" containerID="2245bf54afe907ffc08914c2a3459bdc0bbe12c526ee2c1e57d20b852a98b645"
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.212468 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.217723 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.223179 4858 scope.go:117] "RemoveContainer" containerID="e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69"
Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.223989 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.224009 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.225825 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.241244 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.255352 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259163 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259236 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259281 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259291 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259300 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259308 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259316 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259325 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259333 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259342 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259339 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259444 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259350 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259486 4858 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259496 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259505 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259514 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259523 4858 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259533 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259541 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259583 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259591 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259599 4858 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259607 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259616 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259625 4858 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259633 4858 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259642 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259653 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259663 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259673 4858 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259687 4858 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259697 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259709 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259718 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259726 4858 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259735 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259743 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259753 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259761 4858 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259770 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259779 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259788 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259796 4858 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259805 4858 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259814 4858 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259822 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259831 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259838 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259847 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259855 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259863 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259872 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259880 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259889 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259898 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259907 4858 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259917 4858 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259925 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259934 4858 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259942 4858
reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259950 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259958 4858 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259966 4858 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259975 4858 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259983 4858 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259991 4858 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.259999 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260007 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260016 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260025 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260033 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260041 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260049 4858 reconciler_common.go:293] "Volume detached for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260058 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260091 4858 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260101 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260113 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260126 4858 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260136 4858 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260146 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260156 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260168 4858 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260178 4858 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260189 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260201 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260210 4858 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260218 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260227 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260238 4858 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260248 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260259 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260269 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260282 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260293 4858 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260302 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260309 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260318 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260326 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260334 4858 reconciler_common.go:293] "Volume detached for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260341 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260353 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260371 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260387 4858 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260405 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260422 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260437 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260448 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260460 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260469 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260478 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260486 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260561 4858 reconciler_common.go:293] "Volume detached for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260572 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260581 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260590 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260599 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260608 4858 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260616 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260624 4858 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260634 4858 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260643 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260651 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260660 4858 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260668 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260676 4858 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260685 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260694 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260702 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260711 4858 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260719 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260727 4858 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260735 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260743 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260752 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260760 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260772 4858 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260780 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260789 4858 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260797 4858 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260805 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260813 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260822 4858 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260830 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260838 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260847 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260856 4858 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260864 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260872 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260881 4858 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260889 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath 
\"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260898 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260905 4858 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260913 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260921 4858 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260930 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260938 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260946 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260955 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260963 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260972 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260981 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260990 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.260998 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node 
\"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261006 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261014 4858 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261022 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261031 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261040 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261048 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261056 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261066 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261074 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261082 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261090 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261098 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261106 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc 
kubenswrapper[4858]: I0127 20:07:58.261114 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261122 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261129 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261137 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261145 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261152 4858 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261160 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261168 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261176 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261184 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261191 4858 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261200 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261208 4858 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.261219 
4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.267230 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.280605 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.291922 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.327097 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.331895 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.339508 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.344633 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 20:07:58 crc kubenswrapper[4858]: W0127 20:07:58.364664 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-79c3ded6947536860e5ab3df5f71ea4be2c2d47385eb581fa190015ca2c7b31a WatchSource:0}: Error finding container 79c3ded6947536860e5ab3df5f71ea4be2c2d47385eb581fa190015ca2c7b31a: Status 404 returned error can't find the container with id 79c3ded6947536860e5ab3df5f71ea4be2c2d47385eb581fa190015ca2c7b31a Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.631513 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.666765 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.666893 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:07:59.666868462 +0000 UTC m=+24.374684168 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.667233 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.667344 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.667351 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.667379 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.667576 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:07:59.667542922 +0000 UTC m=+24.375358678 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.667742 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:07:59.667730347 +0000 UTC m=+24.375546053 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.768660 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:07:58 crc kubenswrapper[4858]: I0127 20:07:58.768723 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.768872 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.768902 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.768873 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.768917 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.768926 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.768940 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.768982 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 20:07:59.768962742 +0000 UTC m=+24.476778448 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:07:58 crc kubenswrapper[4858]: E0127 20:07:58.769000 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 20:07:59.768991422 +0000 UTC m=+24.476807128 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.033764 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:41:43.043842119 +0000 UTC Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.070358 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:07:59 crc kubenswrapper[4858]: E0127 20:07:59.070487 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.215840 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60"} Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.215900 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5"} Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.215920 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"79c3ded6947536860e5ab3df5f71ea4be2c2d47385eb581fa190015ca2c7b31a"} Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.217061 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0c1d4a7bf3494dd250df55758cf4244e68f61713961d4a2e0e7bcd4fa2829d20"} Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.218736 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.221792 4858 scope.go:117] "RemoveContainer" containerID="e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69" Jan 27 20:07:59 crc kubenswrapper[4858]: E0127 20:07:59.221931 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.223641 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939"} Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.223685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8adae3b8bafad92b1aa37dca2daf76a112fef951d16fe75b6e5bdf54282e48ff"} Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.232255 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:07:59Z is after 2025-08-24T17:21:41Z" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.243725 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:07:59Z is after 2025-08-24T17:21:41Z" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.255489 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2245bf54afe907ffc08914c2a3459bdc0bbe12c526ee2c1e57d20b852a98b645\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:43Z\\\",\\\"message\\\":\\\"W0127 20:07:43.078609 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
20:07:43.078936 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769544463 cert, and key in /tmp/serving-cert-926854124/serving-signer.crt, /tmp/serving-cert-926854124/serving-signer.key\\\\nI0127 20:07:43.485249 1 observer_polling.go:159] Starting file observer\\\\nW0127 20:07:43.492447 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:43.492707 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:43.493684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-926854124/tls.crt::/tmp/serving-cert-926854124/tls.key\\\\\\\"\\\\nF0127 20:07:43.817906 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered 
and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:07:59Z is after 2025-08-24T17:21:41Z" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.274261 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121c
dcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:07:59Z is after 2025-08-24T17:21:41Z" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.291314 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:07:59Z is after 2025-08-24T17:21:41Z" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.303013 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:07:59Z is after 2025-08-24T17:21:41Z" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.314082 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:07:59Z is after 2025-08-24T17:21:41Z" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.323940 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:07:59Z is after 2025-08-24T17:21:41Z" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.335601 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:07:59Z is after 2025-08-24T17:21:41Z" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.347231 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:07:59Z is after 2025-08-24T17:21:41Z" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.360130 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:07:59Z is after 2025-08-24T17:21:41Z" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.372226 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:07:59Z is after 2025-08-24T17:21:41Z" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.384651 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:07:59Z is after 2025-08-24T17:21:41Z" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.397090 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:07:59Z is after 2025-08-24T17:21:41Z" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.410100 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:07:59Z is after 2025-08-24T17:21:41Z" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.428612 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121c
dcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:07:59Z is after 2025-08-24T17:21:41Z" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.677211 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.677284 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.677318 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:07:59 crc kubenswrapper[4858]: E0127 20:07:59.677418 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 20:07:59 crc kubenswrapper[4858]: E0127 20:07:59.677439 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:07:59 crc kubenswrapper[4858]: E0127 20:07:59.677459 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-27 20:08:01.677431116 +0000 UTC m=+26.385246822 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:07:59 crc kubenswrapper[4858]: E0127 20:07:59.677494 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:01.677487488 +0000 UTC m=+26.385303194 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 20:07:59 crc kubenswrapper[4858]: E0127 20:07:59.677518 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:01.677499688 +0000 UTC m=+26.385315394 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.777880 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:07:59 crc kubenswrapper[4858]: I0127 20:07:59.777946 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:07:59 crc kubenswrapper[4858]: E0127 20:07:59.778041 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 20:07:59 crc kubenswrapper[4858]: E0127 20:07:59.778064 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 20:07:59 crc kubenswrapper[4858]: E0127 20:07:59.778076 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:07:59 crc kubenswrapper[4858]: E0127 20:07:59.778112 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 20:07:59 crc kubenswrapper[4858]: E0127 20:07:59.778150 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 20:07:59 crc kubenswrapper[4858]: E0127 20:07:59.778163 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:07:59 crc kubenswrapper[4858]: E0127 20:07:59.778128 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:01.778113344 +0000 UTC m=+26.485929050 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:07:59 crc kubenswrapper[4858]: E0127 20:07:59.778225 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:01.778209377 +0000 UTC m=+26.486025083 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.034077 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 16:30:15.256376622 +0000 UTC Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.069766 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.076059 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.077223 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.079788 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.081224 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.083947 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.085241 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.086539 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.087781 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:00Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.089088 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.090529 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.092732 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.093837 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.096429 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.097617 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.098781 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.100712 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.102064 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.104092 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.104950 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.106107 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.108307 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.109415 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.111468 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.111527 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:00Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.112418 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.114991 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.115897 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.117200 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.118955 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.119877 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.121406 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.122068 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.123334 4858 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.123459 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.125147 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.126043 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:00Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.126269 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.126706 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.128333 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.129077 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 20:08:00 
crc kubenswrapper[4858]: I0127 20:08:00.129992 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.130635 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.131677 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.132124 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.133108 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.133746 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.134715 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.135204 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.136126 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.136608 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.137716 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.138186 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.139006 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.139496 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.140460 4858 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.141077 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.141540 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.142362 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.142437 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.142503 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.142509 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:00 crc kubenswrapper[4858]: E0127 20:08:00.142675 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:00 crc kubenswrapper[4858]: E0127 20:08:00.142766 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.144579 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b719
5b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:00Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.159668 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:00Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.174003 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:00Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.184665 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:00Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.200183 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:00Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.215988 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:00Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.225476 4858 scope.go:117] "RemoveContainer" containerID="e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69" Jan 27 20:08:00 crc kubenswrapper[4858]: E0127 20:08:00.225726 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.230786 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:00Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.243255 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:00Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.254575 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:00Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.265669 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:00Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.278648 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:00Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.292736 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:00Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.312694 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121c
dcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:00Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:00 crc kubenswrapper[4858]: I0127 20:08:00.325221 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:00Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:01 crc kubenswrapper[4858]: I0127 20:08:01.034376 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:55:01.910955146 +0000 UTC Jan 27 20:08:01 crc kubenswrapper[4858]: I0127 20:08:01.070973 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:01 crc kubenswrapper[4858]: E0127 20:08:01.071132 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:01 crc kubenswrapper[4858]: I0127 20:08:01.230152 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7"} Jan 27 20:08:01 crc kubenswrapper[4858]: I0127 20:08:01.248029 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:01Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:01 crc kubenswrapper[4858]: I0127 20:08:01.279977 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121c
dcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:01Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:01 crc kubenswrapper[4858]: I0127 20:08:01.295336 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:01Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:01 crc kubenswrapper[4858]: I0127 20:08:01.309280 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:01Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:01 crc kubenswrapper[4858]: I0127 20:08:01.321702 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:01Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:01 crc kubenswrapper[4858]: I0127 20:08:01.333292 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:01Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:01 crc kubenswrapper[4858]: I0127 20:08:01.344975 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:01Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:01 crc kubenswrapper[4858]: I0127 20:08:01.357283 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:01Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:01 crc kubenswrapper[4858]: I0127 20:08:01.371706 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:01Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:01 crc kubenswrapper[4858]: I0127 20:08:01.694579 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:08:01 crc kubenswrapper[4858]: I0127 20:08:01.694740 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:01 crc kubenswrapper[4858]: E0127 20:08:01.694837 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:08:05.694802233 +0000 UTC m=+30.402617999 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:08:01 crc kubenswrapper[4858]: I0127 20:08:01.694883 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:01 crc kubenswrapper[4858]: E0127 20:08:01.694928 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 20:08:01 crc kubenswrapper[4858]: E0127 20:08:01.694990 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:08:01 crc kubenswrapper[4858]: E0127 20:08:01.695024 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:05.695006689 +0000 UTC m=+30.402822395 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 20:08:01 crc kubenswrapper[4858]: E0127 20:08:01.695058 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:05.69503899 +0000 UTC m=+30.402854746 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:08:01 crc kubenswrapper[4858]: I0127 20:08:01.795828 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:01 crc kubenswrapper[4858]: I0127 20:08:01.795904 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:01 crc kubenswrapper[4858]: E0127 20:08:01.796068 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 20:08:01 crc kubenswrapper[4858]: E0127 20:08:01.796094 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 20:08:01 crc kubenswrapper[4858]: E0127 20:08:01.796112 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:08:01 crc kubenswrapper[4858]: E0127 20:08:01.796122 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 20:08:01 crc kubenswrapper[4858]: E0127 20:08:01.796168 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 20:08:01 crc kubenswrapper[4858]: E0127 20:08:01.796189 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:05.796167942 +0000 UTC m=+30.503983678 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:08:01 crc kubenswrapper[4858]: E0127 20:08:01.796190 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:08:01 crc kubenswrapper[4858]: E0127 20:08:01.796255 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:05.796233024 +0000 UTC m=+30.504048770 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:08:02 crc kubenswrapper[4858]: I0127 20:08:02.035079 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:55:16.40579876 +0000 UTC Jan 27 20:08:02 crc kubenswrapper[4858]: I0127 20:08:02.070651 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:02 crc kubenswrapper[4858]: I0127 20:08:02.070727 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:02 crc kubenswrapper[4858]: E0127 20:08:02.070793 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:02 crc kubenswrapper[4858]: E0127 20:08:02.070912 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:03 crc kubenswrapper[4858]: I0127 20:08:03.035705 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 13:13:13.396942947 +0000 UTC Jan 27 20:08:03 crc kubenswrapper[4858]: I0127 20:08:03.070071 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:03 crc kubenswrapper[4858]: E0127 20:08:03.070214 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:03 crc kubenswrapper[4858]: I0127 20:08:03.772165 4858 csr.go:261] certificate signing request csr-ghbsr is approved, waiting to be issued Jan 27 20:08:03 crc kubenswrapper[4858]: I0127 20:08:03.806191 4858 csr.go:257] certificate signing request csr-ghbsr is issued Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.037114 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:13:57.86667848 +0000 UTC Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.070471 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.070540 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:04 crc kubenswrapper[4858]: E0127 20:08:04.070652 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:04 crc kubenswrapper[4858]: E0127 20:08:04.070806 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.190818 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-psxnq"] Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.191172 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.191437 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-d2vhz"] Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.192037 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-9d7sv"] Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.192180 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.192232 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-9d7sv" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.193937 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.194540 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.194513 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.195280 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.195678 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.195707 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.195814 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.196532 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.196891 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.196995 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.197009 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.198130 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.202232 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-855m5"] Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.202398 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.202579 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.204298 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.204528 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.216289 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.232985 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.246609 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.266043 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.313940 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121c
dcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315187 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-multus-cni-dir\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315225 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1fe084c8-3445-4507-b00f-8c8e6d101426-os-release\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315250 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1fe084c8-3445-4507-b00f-8c8e6d101426-cnibin\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315273 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp4vf\" (UniqueName: \"kubernetes.io/projected/50837e4c-bd24-4b62-b1e7-b586e702bd40-kube-api-access-lp4vf\") pod \"machine-config-daemon-psxnq\" (UID: \"50837e4c-bd24-4b62-b1e7-b586e702bd40\") " pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315315 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-cnibin\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 
20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315349 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0fea6600-49c2-4130-a506-6046f0f7760d-cni-binary-copy\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315388 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-run-netns\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315418 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-system-cni-dir\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315470 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-os-release\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315510 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-var-lib-cni-bin\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315532 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0fea6600-49c2-4130-a506-6046f0f7760d-multus-daemon-config\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315577 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50837e4c-bd24-4b62-b1e7-b586e702bd40-mcd-auth-proxy-config\") pod \"machine-config-daemon-psxnq\" (UID: \"50837e4c-bd24-4b62-b1e7-b586e702bd40\") " pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315625 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/02269db9-8212-4591-aa62-f135bf69231c-hosts-file\") pod \"node-resolver-9d7sv\" (UID: \"02269db9-8212-4591-aa62-f135bf69231c\") " pod="openshift-dns/node-resolver-9d7sv" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315652 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-run-k8s-cni-cncf-io\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 
20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315675 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/50837e4c-bd24-4b62-b1e7-b586e702bd40-proxy-tls\") pod \"machine-config-daemon-psxnq\" (UID: \"50837e4c-bd24-4b62-b1e7-b586e702bd40\") " pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315691 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-run-multus-certs\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315722 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqswc\" (UniqueName: \"kubernetes.io/projected/02269db9-8212-4591-aa62-f135bf69231c-kube-api-access-jqswc\") pod \"node-resolver-9d7sv\" (UID: \"02269db9-8212-4591-aa62-f135bf69231c\") " pod="openshift-dns/node-resolver-9d7sv" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315740 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-var-lib-cni-multus\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315769 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1fe084c8-3445-4507-b00f-8c8e6d101426-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315797 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp25d\" (UniqueName: \"kubernetes.io/projected/1fe084c8-3445-4507-b00f-8c8e6d101426-kube-api-access-jp25d\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315816 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/50837e4c-bd24-4b62-b1e7-b586e702bd40-rootfs\") pod \"machine-config-daemon-psxnq\" (UID: \"50837e4c-bd24-4b62-b1e7-b586e702bd40\") " pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315833 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-etc-kubernetes\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315850 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-hostroot\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315866 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-var-lib-kubelet\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315880 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1fe084c8-3445-4507-b00f-8c8e6d101426-system-cni-dir\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315894 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1fe084c8-3445-4507-b00f-8c8e6d101426-cni-binary-copy\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315910 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1fe084c8-3445-4507-b00f-8c8e6d101426-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315952 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7sr7\" (UniqueName: \"kubernetes.io/projected/0fea6600-49c2-4130-a506-6046f0f7760d-kube-api-access-r7sr7\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.315982 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-multus-conf-dir\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.316028 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-multus-socket-dir-parent\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.339458 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.363511 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.401213 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.416669 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50837e4c-bd24-4b62-b1e7-b586e702bd40-mcd-auth-proxy-config\") pod \"machine-config-daemon-psxnq\" (UID: \"50837e4c-bd24-4b62-b1e7-b586e702bd40\") " pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417291 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50837e4c-bd24-4b62-b1e7-b586e702bd40-mcd-auth-proxy-config\") pod \"machine-config-daemon-psxnq\" (UID: \"50837e4c-bd24-4b62-b1e7-b586e702bd40\") " pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417348 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/02269db9-8212-4591-aa62-f135bf69231c-hosts-file\") pod \"node-resolver-9d7sv\" (UID: \"02269db9-8212-4591-aa62-f135bf69231c\") " pod="openshift-dns/node-resolver-9d7sv" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417369 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-run-k8s-cni-cncf-io\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417389 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/50837e4c-bd24-4b62-b1e7-b586e702bd40-proxy-tls\") pod \"machine-config-daemon-psxnq\" (UID: \"50837e4c-bd24-4b62-b1e7-b586e702bd40\") " pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417404 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-run-multus-certs\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417428 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqswc\" (UniqueName: 
\"kubernetes.io/projected/02269db9-8212-4591-aa62-f135bf69231c-kube-api-access-jqswc\") pod \"node-resolver-9d7sv\" (UID: \"02269db9-8212-4591-aa62-f135bf69231c\") " pod="openshift-dns/node-resolver-9d7sv" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417446 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-var-lib-cni-multus\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417462 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1fe084c8-3445-4507-b00f-8c8e6d101426-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417478 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp25d\" (UniqueName: \"kubernetes.io/projected/1fe084c8-3445-4507-b00f-8c8e6d101426-kube-api-access-jp25d\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417493 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/50837e4c-bd24-4b62-b1e7-b586e702bd40-rootfs\") pod \"machine-config-daemon-psxnq\" (UID: \"50837e4c-bd24-4b62-b1e7-b586e702bd40\") " pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417506 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-etc-kubernetes\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417521 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-hostroot\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417536 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-var-lib-kubelet\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417567 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1fe084c8-3445-4507-b00f-8c8e6d101426-system-cni-dir\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417583 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/1fe084c8-3445-4507-b00f-8c8e6d101426-cni-binary-copy\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417598 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1fe084c8-3445-4507-b00f-8c8e6d101426-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417616 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7sr7\" (UniqueName: \"kubernetes.io/projected/0fea6600-49c2-4130-a506-6046f0f7760d-kube-api-access-r7sr7\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417633 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-multus-conf-dir\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417655 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-multus-socket-dir-parent\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417677 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-multus-cni-dir\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417693 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1fe084c8-3445-4507-b00f-8c8e6d101426-os-release\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417708 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-run-netns\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417723 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1fe084c8-3445-4507-b00f-8c8e6d101426-cnibin\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417740 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp4vf\" (UniqueName: 
\"kubernetes.io/projected/50837e4c-bd24-4b62-b1e7-b586e702bd40-kube-api-access-lp4vf\") pod \"machine-config-daemon-psxnq\" (UID: \"50837e4c-bd24-4b62-b1e7-b586e702bd40\") " pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417754 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-cnibin\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417788 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0fea6600-49c2-4130-a506-6046f0f7760d-cni-binary-copy\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417807 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-system-cni-dir\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417827 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-os-release\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417853 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-var-lib-cni-bin\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.417872 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0fea6600-49c2-4130-a506-6046f0f7760d-multus-daemon-config\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.418359 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0fea6600-49c2-4130-a506-6046f0f7760d-multus-daemon-config\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.418408 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/02269db9-8212-4591-aa62-f135bf69231c-hosts-file\") pod \"node-resolver-9d7sv\" (UID: \"02269db9-8212-4591-aa62-f135bf69231c\") " pod="openshift-dns/node-resolver-9d7sv" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.418433 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-run-k8s-cni-cncf-io\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc 
kubenswrapper[4858]: I0127 20:08:04.418850 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-run-multus-certs\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.419111 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-hostroot\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.419112 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-multus-conf-dir\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.419149 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-var-lib-cni-multus\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.419177 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-var-lib-kubelet\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.419242 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-cnibin\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.420445 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1fe084c8-3445-4507-b00f-8c8e6d101426-system-cni-dir\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.420523 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-var-lib-cni-bin\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.420601 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-system-cni-dir\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.420667 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1fe084c8-3445-4507-b00f-8c8e6d101426-cnibin\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: 
\"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.420686 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-host-run-netns\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.420725 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1fe084c8-3445-4507-b00f-8c8e6d101426-os-release\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.420723 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-os-release\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.420777 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/50837e4c-bd24-4b62-b1e7-b586e702bd40-rootfs\") pod \"machine-config-daemon-psxnq\" (UID: \"50837e4c-bd24-4b62-b1e7-b586e702bd40\") " pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.420784 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-multus-socket-dir-parent\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.420829 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-etc-kubernetes\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.420889 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1fe084c8-3445-4507-b00f-8c8e6d101426-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.420904 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0fea6600-49c2-4130-a506-6046f0f7760d-multus-cni-dir\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.421265 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0fea6600-49c2-4130-a506-6046f0f7760d-cni-binary-copy\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.422518 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1fe084c8-3445-4507-b00f-8c8e6d101426-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.423038 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1fe084c8-3445-4507-b00f-8c8e6d101426-cni-binary-copy\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.430906 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/50837e4c-bd24-4b62-b1e7-b586e702bd40-proxy-tls\") pod \"machine-config-daemon-psxnq\" (UID: \"50837e4c-bd24-4b62-b1e7-b586e702bd40\") " pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.432759 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.441204 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lp4vf\" (UniqueName: \"kubernetes.io/projected/50837e4c-bd24-4b62-b1e7-b586e702bd40-kube-api-access-lp4vf\") pod \"machine-config-daemon-psxnq\" (UID: \"50837e4c-bd24-4b62-b1e7-b586e702bd40\") " pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.444079 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqswc\" (UniqueName: \"kubernetes.io/projected/02269db9-8212-4591-aa62-f135bf69231c-kube-api-access-jqswc\") pod \"node-resolver-9d7sv\" (UID: \"02269db9-8212-4591-aa62-f135bf69231c\") " pod="openshift-dns/node-resolver-9d7sv" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.445467 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7sr7\" (UniqueName: \"kubernetes.io/projected/0fea6600-49c2-4130-a506-6046f0f7760d-kube-api-access-r7sr7\") pod \"multus-855m5\" (UID: \"0fea6600-49c2-4130-a506-6046f0f7760d\") " pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.454992 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.459282 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp25d\" (UniqueName: \"kubernetes.io/projected/1fe084c8-3445-4507-b00f-8c8e6d101426-kube-api-access-jp25d\") pod \"multus-additional-cni-plugins-d2vhz\" (UID: \"1fe084c8-3445-4507-b00f-8c8e6d101426\") " pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.498953 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac40
53215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.506532 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.513087 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.518771 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9d7sv" Jan 27 20:08:04 crc kubenswrapper[4858]: W0127 20:08:04.519795 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50837e4c_bd24_4b62_b1e7_b586e702bd40.slice/crio-3bd88b600bb614a50b5335a739206bc1f9c74d4642b91e6a9d79d0f4d6a905f9 WatchSource:0}: Error finding container 3bd88b600bb614a50b5335a739206bc1f9c74d4642b91e6a9d79d0f4d6a905f9: Status 404 returned error can't find the container with id 3bd88b600bb614a50b5335a739206bc1f9c74d4642b91e6a9d79d0f4d6a905f9 Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.520620 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.524310 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-855m5" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.534688 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc 
kubenswrapper[4858]: I0127 20:08:04.550248 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\
":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready
\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.563246 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: W0127 20:08:04.565122 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fea6600_49c2_4130_a506_6046f0f7760d.slice/crio-6944f05b58670ece5c35aabaad25a325a8fc4d53313accbf2b59affda3f7d676 WatchSource:0}: Error finding container 6944f05b58670ece5c35aabaad25a325a8fc4d53313accbf2b59affda3f7d676: Status 404 returned error can't find the container with id 6944f05b58670ece5c35aabaad25a325a8fc4d53313accbf2b59affda3f7d676 Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.580194 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.590587 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rsk7j"] Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.591526 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.597095 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.597164 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.597327 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.597373 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.597474 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.597888 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.598033 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.598103 4858 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.616972 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.634155 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.649746 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.661104 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.671423 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.683462 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.696808 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.707774 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.709989 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.711508 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.711560 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.711569 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.711652 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.717453 4858 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.717753 4858 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.718806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.718828 4858 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.718836 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.718850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.718860 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:04Z","lastTransitionTime":"2026-01-27T20:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.721282 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.722621 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-run-ovn-kubernetes\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc 
kubenswrapper[4858]: I0127 20:08:04.722663 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-ovnkube-config\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.722686 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5cda3ac1-7db7-4215-a301-b757743bff59-ovn-node-metrics-cert\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.722720 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-log-socket\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.722746 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-var-lib-openvswitch\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.722768 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-node-log\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.722793 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.722817 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p24pj\" (UniqueName: \"kubernetes.io/projected/5cda3ac1-7db7-4215-a301-b757743bff59-kube-api-access-p24pj\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.722838 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-ovn\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.722872 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-etc-openvswitch\") pod 
\"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.722892 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-openvswitch\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.722915 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-run-netns\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.722948 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-kubelet\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.722970 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-cni-netd\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.723002 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-slash\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.723039 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-systemd\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.723058 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-cni-bin\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.723098 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-ovnkube-script-lib\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.723127 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-systemd-units\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.723150 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-env-overrides\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.732139 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: E0127 20:08:04.737128 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.743514 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.743574 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.743585 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.743601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.743612 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:04Z","lastTransitionTime":"2026-01-27T20:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:04 crc kubenswrapper[4858]: E0127 20:08:04.754251 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 
2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.755634 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.761372 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.761400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.761408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.761421 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.761429 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:04Z","lastTransitionTime":"2026-01-27T20:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:04 crc kubenswrapper[4858]: E0127 20:08:04.777749 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 
2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.782419 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.782463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.782475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.782493 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.782506 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:04Z","lastTransitionTime":"2026-01-27T20:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.782965 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877
441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: E0127 20:08:04.801728 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.804181 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.804510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.804534 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.804543 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.804579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.804589 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:04Z","lastTransitionTime":"2026-01-27T20:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.807217 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 20:03:03 +0000 UTC, rotation deadline is 2026-12-16 10:09:37.807152435 +0000 UTC Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.807274 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7742h1m32.999879954s for next certificate rotation Jan 27 20:08:04 crc kubenswrapper[4858]: E0127 20:08:04.817635 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 
2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: E0127 20:08:04.817753 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.819315 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.819342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.819352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.819366 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.819376 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:04Z","lastTransitionTime":"2026-01-27T20:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.820233 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823583 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-log-socket\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823642 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-var-lib-openvswitch\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823668 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-node-log\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823690 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823712 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p24pj\" (UniqueName: \"kubernetes.io/projected/5cda3ac1-7db7-4215-a301-b757743bff59-kube-api-access-p24pj\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823716 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-log-socket\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823733 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-ovn\") pod \"ovnkube-node-rsk7j\" 
(UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823778 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-ovn\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823798 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-etc-openvswitch\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823780 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-etc-openvswitch\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823770 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-var-lib-openvswitch\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823804 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-node-log\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823818 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823854 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-openvswitch\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823877 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-openvswitch\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823896 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-run-netns\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" 
Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823914 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-kubelet\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823927 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-cni-netd\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823931 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-run-netns\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823953 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-slash\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823973 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-cni-bin\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823988 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-ovnkube-script-lib\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823993 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-kubelet\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.824003 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-systemd\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.823998 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-cni-netd\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.824022 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-cni-bin\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.824031 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-systemd-units\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.824057 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-env-overrides\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.824058 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-slash\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.824063 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-systemd\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.824083 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-run-ovn-kubernetes\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.824093 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-systemd-units\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.824108 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-run-ovn-kubernetes\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.824115 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-ovnkube-config\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.824137 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/5cda3ac1-7db7-4215-a301-b757743bff59-ovn-node-metrics-cert\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.824858 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-env-overrides\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.824882 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-ovnkube-config\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.824934 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-ovnkube-script-lib\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.829046 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5cda3ac1-7db7-4215-a301-b757743bff59-ovn-node-metrics-cert\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.834962 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.844153 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p24pj\" (UniqueName: \"kubernetes.io/projected/5cda3ac1-7db7-4215-a301-b757743bff59-kube-api-access-p24pj\") pod \"ovnkube-node-rsk7j\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.855217 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.868772 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.880193 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.894150 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.905803 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:04Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.908000 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:04 crc kubenswrapper[4858]: W0127 20:08:04.919706 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cda3ac1_7db7_4215_a301_b757743bff59.slice/crio-07749768e0b8ef654ec84d90c12f1cc30c42b84087e485d48d2ba9bab3abf3a2 WatchSource:0}: Error finding container 07749768e0b8ef654ec84d90c12f1cc30c42b84087e485d48d2ba9bab3abf3a2: Status 404 returned error can't find the container with id 07749768e0b8ef654ec84d90c12f1cc30c42b84087e485d48d2ba9bab3abf3a2 Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.921099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.921134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.921144 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.921158 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:04 crc kubenswrapper[4858]: I0127 20:08:04.921169 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:04Z","lastTransitionTime":"2026-01-27T20:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.023121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.023157 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.023168 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.023182 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.023190 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:05Z","lastTransitionTime":"2026-01-27T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.037710 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 12:58:24.630344686 +0000 UTC Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.070037 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:05 crc kubenswrapper[4858]: E0127 20:08:05.070169 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.126476 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.126518 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.126529 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.126561 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.126575 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:05Z","lastTransitionTime":"2026-01-27T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.228306 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.228344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.228362 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.228378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.228389 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:05Z","lastTransitionTime":"2026-01-27T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.240869 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" event={"ID":"1fe084c8-3445-4507-b00f-8c8e6d101426","Type":"ContainerStarted","Data":"4e4e861375d350a01278ea27af63f0082bf793f9af2f28447a5aee47acea7d3d"} Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.242143 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689"} Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.242169 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"3bd88b600bb614a50b5335a739206bc1f9c74d4642b91e6a9d79d0f4d6a905f9"} Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.243155 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerStarted","Data":"07749768e0b8ef654ec84d90c12f1cc30c42b84087e485d48d2ba9bab3abf3a2"} Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.244182 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9d7sv" event={"ID":"02269db9-8212-4591-aa62-f135bf69231c","Type":"ContainerStarted","Data":"0b38397e5e722ae32fbac7958cb1f7f97558d7b75421af2d0b58b6d2c7a4539c"} Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.245251 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-855m5" event={"ID":"0fea6600-49c2-4130-a506-6046f0f7760d","Type":"ContainerStarted","Data":"e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea"} Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.245278 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-855m5" event={"ID":"0fea6600-49c2-4130-a506-6046f0f7760d","Type":"ContainerStarted","Data":"6944f05b58670ece5c35aabaad25a325a8fc4d53313accbf2b59affda3f7d676"} Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.330609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.330641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.330649 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.330661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.330670 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:05Z","lastTransitionTime":"2026-01-27T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.433255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.433288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.433296 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.433310 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.433320 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:05Z","lastTransitionTime":"2026-01-27T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.534827 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.534856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.534866 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.534879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.534887 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:05Z","lastTransitionTime":"2026-01-27T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.637079 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.637336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.637434 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.637563 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.637657 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:05Z","lastTransitionTime":"2026-01-27T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.731477 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.731638 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.731687 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:05 crc kubenswrapper[4858]: E0127 20:08:05.731814 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 20:08:05 crc kubenswrapper[4858]: E0127 20:08:05.731884 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:13.731858257 +0000 UTC m=+38.439673963 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 20:08:05 crc kubenswrapper[4858]: E0127 20:08:05.731937 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:08:05 crc kubenswrapper[4858]: E0127 20:08:05.732050 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:13.732032102 +0000 UTC m=+38.439847808 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:08:05 crc kubenswrapper[4858]: E0127 20:08:05.732182 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 20:08:13.732167646 +0000 UTC m=+38.439983382 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.739818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.739854 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.739867 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.739882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.739890 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:05Z","lastTransitionTime":"2026-01-27T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.812372 4858 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.832145 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.832211 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:05 crc kubenswrapper[4858]: E0127 20:08:05.832332 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 20:08:05 crc kubenswrapper[4858]: E0127 20:08:05.832333 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 20:08:05 crc kubenswrapper[4858]: E0127 20:08:05.832353 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 20:08:05 crc kubenswrapper[4858]: E0127 20:08:05.832367 
4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 20:08:05 crc kubenswrapper[4858]: E0127 20:08:05.832372 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:08:05 crc kubenswrapper[4858]: E0127 20:08:05.832382 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:08:05 crc kubenswrapper[4858]: E0127 20:08:05.832432 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:13.832413902 +0000 UTC m=+38.540229618 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:08:05 crc kubenswrapper[4858]: E0127 20:08:05.832452 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:13.832441873 +0000 UTC m=+38.540257589 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.842311 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.842493 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.842596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.842685 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.842759 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:05Z","lastTransitionTime":"2026-01-27T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.945530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.945583 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.945595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.945610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:05 crc kubenswrapper[4858]: I0127 20:08:05.945620 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:05Z","lastTransitionTime":"2026-01-27T20:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.038292 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:12:44.659706514 +0000 UTC Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.048499 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.048532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.048542 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.048579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.048589 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:06Z","lastTransitionTime":"2026-01-27T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.070104 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.070153 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:06 crc kubenswrapper[4858]: E0127 20:08:06.070235 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:06 crc kubenswrapper[4858]: E0127 20:08:06.070322 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.090136 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.102293 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.114772 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.126622 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.141310 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.151499 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.151609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.151624 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.151663 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.151677 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:06Z","lastTransitionTime":"2026-01-27T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.168870 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.183353 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.195178 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.210487 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.220841 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.239122 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name
\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.249497 4858 generic.go:334] "Generic (PLEG): container finished" podID="5cda3ac1-7db7-4215-a301-b757743bff59" containerID="d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6" exitCode=0
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.249574 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerDied","Data":"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6"}
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.250766 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9d7sv" event={"ID":"02269db9-8212-4591-aa62-f135bf69231c","Type":"ContainerStarted","Data":"3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772"}
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.252431 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.255000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.255147 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.255234 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.255327 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.255412 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:06Z","lastTransitionTime":"2026-01-27T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.255636 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939"} Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.257744 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" event={"ID":"1fe084c8-3445-4507-b00f-8c8e6d101426","Type":"ContainerStarted","Data":"f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb"} Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.265414 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.280718 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.295897 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.307339 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.321891 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f
6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.332707 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.353177 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.357803 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.357859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.357871 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.357894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.357909 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:06Z","lastTransitionTime":"2026-01-27T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.372123 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.383411 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.394862 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.407183 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.421154 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.436847 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.448306 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.460132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.460176 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.460189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.460208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.460219 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:06Z","lastTransitionTime":"2026-01-27T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.461796 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.472430 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.563041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.563076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.563087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.563102 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.563111 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:06Z","lastTransitionTime":"2026-01-27T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.665037 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.665072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.665082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.665097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.665107 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:06Z","lastTransitionTime":"2026-01-27T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.767457 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.767494 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.767506 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.767523 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.767536 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:06Z","lastTransitionTime":"2026-01-27T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.800423 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lqbtf"]
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.800875 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lqbtf"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.802512 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.802747 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.802834 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.803228 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.825172 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.838750 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.845366 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef638e59-7a7d-44a7-b6ae-f8b87b52fc68-host\") pod \"node-ca-lqbtf\" (UID: \"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\") " pod="openshift-image-registry/node-ca-lqbtf"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.845428 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ef638e59-7a7d-44a7-b6ae-f8b87b52fc68-serviceca\") pod \"node-ca-lqbtf\" (UID: \"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\") " pod="openshift-image-registry/node-ca-lqbtf"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.845463 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dzk5\" (UniqueName: \"kubernetes.io/projected/ef638e59-7a7d-44a7-b6ae-f8b87b52fc68-kube-api-access-5dzk5\") pod \"node-ca-lqbtf\" (UID: \"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\") " pod="openshift-image-registry/node-ca-lqbtf"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.849241 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.864344 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.870247 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.870281 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.870290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.870304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.870313 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:06Z","lastTransitionTime":"2026-01-27T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.875030 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.905838 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.928402 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.946024 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dzk5\" (UniqueName: \"kubernetes.io/projected/ef638e59-7a7d-44a7-b6ae-f8b87b52fc68-kube-api-access-5dzk5\") pod \"node-ca-lqbtf\" (UID: \"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\") " pod="openshift-image-registry/node-ca-lqbtf"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.946069 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef638e59-7a7d-44a7-b6ae-f8b87b52fc68-host\") pod \"node-ca-lqbtf\" (UID: \"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\") " pod="openshift-image-registry/node-ca-lqbtf"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.946108 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ef638e59-7a7d-44a7-b6ae-f8b87b52fc68-serviceca\") pod \"node-ca-lqbtf\" (UID: \"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\") " pod="openshift-image-registry/node-ca-lqbtf"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.946394 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef638e59-7a7d-44a7-b6ae-f8b87b52fc68-host\") pod \"node-ca-lqbtf\" (UID: \"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\") " pod="openshift-image-registry/node-ca-lqbtf"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.947199 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.952795 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ef638e59-7a7d-44a7-b6ae-f8b87b52fc68-serviceca\") pod \"node-ca-lqbtf\" (UID: \"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\") " pod="openshift-image-registry/node-ca-lqbtf"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.958093 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.966328 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dzk5\" (UniqueName: \"kubernetes.io/projected/ef638e59-7a7d-44a7-b6ae-f8b87b52fc68-kube-api-access-5dzk5\") pod \"node-ca-lqbtf\" (UID: \"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\") " pod="openshift-image-registry/node-ca-lqbtf"
Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.970993 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.972293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.972330 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.972338 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.972352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.972361 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:06Z","lastTransitionTime":"2026-01-27T20:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:06 crc kubenswrapper[4858]: I0127 20:08:06.983042 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.001388 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.013027 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.028945 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.039792 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 10:38:00.944633146 +0000 UTC Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.042615 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.069983 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:07 crc kubenswrapper[4858]: E0127 20:08:07.070353 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.074409 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.074441 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.074449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.074464 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.074473 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:07Z","lastTransitionTime":"2026-01-27T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.113042 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lqbtf" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.178249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.178312 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.178327 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.178352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.178370 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:07Z","lastTransitionTime":"2026-01-27T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.266312 4858 generic.go:334] "Generic (PLEG): container finished" podID="1fe084c8-3445-4507-b00f-8c8e6d101426" containerID="f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb" exitCode=0 Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.266412 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" event={"ID":"1fe084c8-3445-4507-b00f-8c8e6d101426","Type":"ContainerDied","Data":"f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb"} Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.269303 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lqbtf" event={"ID":"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68","Type":"ContainerStarted","Data":"0b73294f797b4f062d8d11d0e781af788230a877061fda084f33f8e6e3b3ffaf"} Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.274515 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerStarted","Data":"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976"} Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.274604 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerStarted","Data":"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281"} Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.274628 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerStarted","Data":"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a"} Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.282086 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.283220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.283254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.283268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.283288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.283302 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:07Z","lastTransitionTime":"2026-01-27T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.296243 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.316265 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z 
is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.336919 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.349845 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.362454 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.375374 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.387474 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.388855 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.388901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.388913 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.388930 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.388939 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:07Z","lastTransitionTime":"2026-01-27T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.401371 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.412504 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.424899 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.439108 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.456314 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.469191 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.482863 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:07Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.491608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.491763 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.491838 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.491945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.492029 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:07Z","lastTransitionTime":"2026-01-27T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.594528 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.594952 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.594968 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.594988 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.595006 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:07Z","lastTransitionTime":"2026-01-27T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.698387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.698435 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.698444 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.698460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.698470 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:07Z","lastTransitionTime":"2026-01-27T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.801217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.801270 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.801285 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.801299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.801309 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:07Z","lastTransitionTime":"2026-01-27T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.903516 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.903622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.903643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.903664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:07 crc kubenswrapper[4858]: I0127 20:08:07.903682 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:07Z","lastTransitionTime":"2026-01-27T20:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.006687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.006728 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.006739 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.006756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.006767 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:08Z","lastTransitionTime":"2026-01-27T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.040269 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:47:35.290109256 +0000 UTC Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.070040 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.070117 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:08 crc kubenswrapper[4858]: E0127 20:08:08.070198 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:08 crc kubenswrapper[4858]: E0127 20:08:08.070461 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.109127 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.109190 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.109202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.109242 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.109256 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:08Z","lastTransitionTime":"2026-01-27T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.212071 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.212111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.212120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.212136 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.212152 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:08Z","lastTransitionTime":"2026-01-27T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.281503 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerStarted","Data":"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9"} Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.281589 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerStarted","Data":"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5"} Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.283505 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" event={"ID":"1fe084c8-3445-4507-b00f-8c8e6d101426","Type":"ContainerStarted","Data":"a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566"} Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.285930 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lqbtf" event={"ID":"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68","Type":"ContainerStarted","Data":"0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e"} Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.300891 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.314818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.314860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.314871 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.314892 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.314905 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:08Z","lastTransitionTime":"2026-01-27T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.315036 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.326348 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.339273 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.352052 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.367775 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.380474 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.394123 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.409319 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.416927 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.416962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.416972 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.416987 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.416996 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:08Z","lastTransitionTime":"2026-01-27T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.419793 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.436151 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.455207 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.467940 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.482467 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.494672 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.507058 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.517538 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.520785 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.520827 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.520835 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.520850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.520862 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:08Z","lastTransitionTime":"2026-01-27T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.528909 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.540445 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.553183 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.565535 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.574729 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.589644 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.602415 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.619646 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.623299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.623331 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.623341 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.623356 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.623369 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:08Z","lastTransitionTime":"2026-01-27T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.640819 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.653297 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.665003 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.677557 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.688059 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:08Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.726723 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.726762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.726772 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.726787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.726798 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:08Z","lastTransitionTime":"2026-01-27T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.829786 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.829831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.829843 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.829860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.829873 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:08Z","lastTransitionTime":"2026-01-27T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.932834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.932872 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.932881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.932895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:08 crc kubenswrapper[4858]: I0127 20:08:08.932906 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:08Z","lastTransitionTime":"2026-01-27T20:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.035695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.036043 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.036153 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.036239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.036340 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:09Z","lastTransitionTime":"2026-01-27T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.041024 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:30:21.776582198 +0000 UTC Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.070203 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:09 crc kubenswrapper[4858]: E0127 20:08:09.070428 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.139441 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.139475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.139508 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.139526 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.139539 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:09Z","lastTransitionTime":"2026-01-27T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.242176 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.242222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.242239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.242261 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.242278 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:09Z","lastTransitionTime":"2026-01-27T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.293244 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerStarted","Data":"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f"} Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.345146 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.345194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.345208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.345225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.345237 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:09Z","lastTransitionTime":"2026-01-27T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.447713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.447759 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.447774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.447791 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.447805 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:09Z","lastTransitionTime":"2026-01-27T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.549690 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.549733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.549742 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.549758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.549768 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:09Z","lastTransitionTime":"2026-01-27T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.651531 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.651597 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.651610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.651626 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.651638 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:09Z","lastTransitionTime":"2026-01-27T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.753675 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.753720 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.753731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.753745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.753756 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:09Z","lastTransitionTime":"2026-01-27T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.856112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.856159 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.856170 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.856186 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.856197 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:09Z","lastTransitionTime":"2026-01-27T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.958876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.958919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.958931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.958948 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:09 crc kubenswrapper[4858]: I0127 20:08:09.958961 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:09Z","lastTransitionTime":"2026-01-27T20:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.042004 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:36:22.477676628 +0000 UTC Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.060809 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.060853 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.060863 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.060881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.060894 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:10Z","lastTransitionTime":"2026-01-27T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.070307 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.070355 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:10 crc kubenswrapper[4858]: E0127 20:08:10.070416 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:10 crc kubenswrapper[4858]: E0127 20:08:10.070611 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.163418 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.163472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.163485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.163502 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.163517 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:10Z","lastTransitionTime":"2026-01-27T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.266083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.266127 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.266142 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.266161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.266179 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:10Z","lastTransitionTime":"2026-01-27T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.297623 4858 generic.go:334] "Generic (PLEG): container finished" podID="1fe084c8-3445-4507-b00f-8c8e6d101426" containerID="a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566" exitCode=0 Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.297691 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" event={"ID":"1fe084c8-3445-4507-b00f-8c8e6d101426","Type":"ContainerDied","Data":"a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566"} Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.314450 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.326101 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.341723 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.354654 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.368353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.368393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.368402 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.368419 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.368431 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:10Z","lastTransitionTime":"2026-01-27T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.375165 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:10Z 
is after 2025-08-24T17:21:41Z" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.397033 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.414111 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.425761 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.436243 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.448043 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.464534 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.470298 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.470339 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.470351 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.470366 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.470377 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:10Z","lastTransitionTime":"2026-01-27T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.479720 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.492099 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.505661 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.515190 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.572456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.572488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.572497 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.572510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.572518 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:10Z","lastTransitionTime":"2026-01-27T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.675320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.675369 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.675380 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.675398 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.675414 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:10Z","lastTransitionTime":"2026-01-27T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.778331 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.778369 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.778380 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.778394 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.778405 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:10Z","lastTransitionTime":"2026-01-27T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.881144 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.881182 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.881191 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.881205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.881215 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:10Z","lastTransitionTime":"2026-01-27T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.985357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.985394 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.985406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.985422 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:10 crc kubenswrapper[4858]: I0127 20:08:10.985434 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:10Z","lastTransitionTime":"2026-01-27T20:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.042427 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:24:03.285018716 +0000 UTC Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.070263 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:11 crc kubenswrapper[4858]: E0127 20:08:11.070384 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.088077 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.088117 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.088129 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.088144 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.088155 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:11Z","lastTransitionTime":"2026-01-27T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.190925 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.190971 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.190980 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.190996 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.191007 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:11Z","lastTransitionTime":"2026-01-27T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.293391 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.293440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.293450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.293467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.293477 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:11Z","lastTransitionTime":"2026-01-27T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.302768 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" event={"ID":"1fe084c8-3445-4507-b00f-8c8e6d101426","Type":"ContainerStarted","Data":"1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593"} Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.396889 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.396943 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.396953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.396968 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.396978 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:11Z","lastTransitionTime":"2026-01-27T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.499775 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.499840 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.499859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.499886 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.499902 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:11Z","lastTransitionTime":"2026-01-27T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.602161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.602237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.602262 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.602292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.602317 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:11Z","lastTransitionTime":"2026-01-27T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.705917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.706002 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.706018 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.706039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.706058 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:11Z","lastTransitionTime":"2026-01-27T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.809541 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.809608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.809621 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.809833 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.809846 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:11Z","lastTransitionTime":"2026-01-27T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.912381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.912438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.912457 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.912481 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:11 crc kubenswrapper[4858]: I0127 20:08:11.912498 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:11Z","lastTransitionTime":"2026-01-27T20:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.015598 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.015704 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.015725 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.015751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.015769 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:12Z","lastTransitionTime":"2026-01-27T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.043140 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 02:31:38.88065601 +0000 UTC Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.070740 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.070819 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:12 crc kubenswrapper[4858]: E0127 20:08:12.070904 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:12 crc kubenswrapper[4858]: E0127 20:08:12.070985 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.118340 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.118405 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.118422 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.118444 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.118456 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:12Z","lastTransitionTime":"2026-01-27T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.221537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.221591 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.221599 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.221614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.221627 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:12Z","lastTransitionTime":"2026-01-27T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.308117 4858 generic.go:334] "Generic (PLEG): container finished" podID="1fe084c8-3445-4507-b00f-8c8e6d101426" containerID="1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593" exitCode=0 Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.308195 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" event={"ID":"1fe084c8-3445-4507-b00f-8c8e6d101426","Type":"ContainerDied","Data":"1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593"} Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.318665 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerStarted","Data":"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb"} Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.324008 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.324038 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.324047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.324066 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.324085 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:12Z","lastTransitionTime":"2026-01-27T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.335068 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.348914 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.365846 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.378271 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.391953 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.409208 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.424935 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.427051 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.427371 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.427385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.427401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.427411 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:12Z","lastTransitionTime":"2026-01-27T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.435741 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.452608 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.470071 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.481422 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.490720 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.501515 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.511733 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.522309 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.530411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.530455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.530465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.530482 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.530491 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:12Z","lastTransitionTime":"2026-01-27T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.632492 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.632530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.632541 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.632572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.632597 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:12Z","lastTransitionTime":"2026-01-27T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.735019 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.735116 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.735138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.735156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.735167 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:12Z","lastTransitionTime":"2026-01-27T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.837374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.837423 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.837435 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.837454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.837468 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:12Z","lastTransitionTime":"2026-01-27T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.940057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.940100 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.940109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.940123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:12 crc kubenswrapper[4858]: I0127 20:08:12.940134 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:12Z","lastTransitionTime":"2026-01-27T20:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.043330 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:33:28.399326915 +0000 UTC Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.043413 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.043454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.043467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.043486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.043499 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:13Z","lastTransitionTime":"2026-01-27T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.070239 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:13 crc kubenswrapper[4858]: E0127 20:08:13.070433 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.146974 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.147038 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.147057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.147081 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.147125 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:13Z","lastTransitionTime":"2026-01-27T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.250023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.250093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.250119 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.250153 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.250177 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:13Z","lastTransitionTime":"2026-01-27T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.327214 4858 generic.go:334] "Generic (PLEG): container finished" podID="1fe084c8-3445-4507-b00f-8c8e6d101426" containerID="f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157" exitCode=0 Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.327270 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" event={"ID":"1fe084c8-3445-4507-b00f-8c8e6d101426","Type":"ContainerDied","Data":"f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157"} Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.349102 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.353336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.353387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.353405 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.353429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.353448 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:13Z","lastTransitionTime":"2026-01-27T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.365813 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.383115 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.403356 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.420169 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.435351 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.447215 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.455171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.455208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.455215 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.455229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.455238 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:13Z","lastTransitionTime":"2026-01-27T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.466058 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.479945 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.496973 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:13Z 
is after 2025-08-24T17:21:41Z" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.517160 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.530032 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.540463 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.557418 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.557453 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.557462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.557475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.557484 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:13Z","lastTransitionTime":"2026-01-27T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.559908 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.569155 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\"
 for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:13Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.659509 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.659623 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.659662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.659676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.659685 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:13Z","lastTransitionTime":"2026-01-27T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.761854 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.761890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.761897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.761912 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.761921 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:13Z","lastTransitionTime":"2026-01-27T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.816506 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.816596 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.816633 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:13 crc kubenswrapper[4858]: E0127 20:08:13.816720 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:08:29.816689708 +0000 UTC m=+54.524505404 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:08:13 crc kubenswrapper[4858]: E0127 20:08:13.816729 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 20:08:13 crc kubenswrapper[4858]: E0127 20:08:13.816751 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:08:13 crc kubenswrapper[4858]: E0127 20:08:13.816781 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:29.816774161 +0000 UTC m=+54.524589867 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 20:08:13 crc kubenswrapper[4858]: E0127 20:08:13.816813 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:29.816788421 +0000 UTC m=+54.524604127 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.864537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.864589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.864600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.864614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.864625 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:13Z","lastTransitionTime":"2026-01-27T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.917739 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.917828 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:13 crc kubenswrapper[4858]: E0127 20:08:13.917943 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 20:08:13 crc kubenswrapper[4858]: E0127 20:08:13.917972 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 20:08:13 crc kubenswrapper[4858]: E0127 20:08:13.917983 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:08:13 crc kubenswrapper[4858]: E0127 20:08:13.918000 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 20:08:13 crc kubenswrapper[4858]: E0127 20:08:13.918030 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 20:08:13 crc kubenswrapper[4858]: E0127 20:08:13.918044 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:08:13 crc kubenswrapper[4858]: E0127 20:08:13.918049 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:29.918024296 +0000 UTC m=+54.625839992 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:08:13 crc kubenswrapper[4858]: E0127 20:08:13.918083 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:29.918070457 +0000 UTC m=+54.625886183 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.966978 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.967006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.967014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.967026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:13 crc kubenswrapper[4858]: I0127 20:08:13.967036 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:13Z","lastTransitionTime":"2026-01-27T20:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.044256 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:51:17.059029408 +0000 UTC Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.069271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.069307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.069318 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.069333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.069345 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:14Z","lastTransitionTime":"2026-01-27T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.073215 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:14 crc kubenswrapper[4858]: E0127 20:08:14.073318 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.073648 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:14 crc kubenswrapper[4858]: E0127 20:08:14.073719 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.171747 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.171853 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.171865 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.171880 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.171891 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:14Z","lastTransitionTime":"2026-01-27T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.274538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.274580 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.274588 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.274607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.274626 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:14Z","lastTransitionTime":"2026-01-27T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.336104 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" event={"ID":"1fe084c8-3445-4507-b00f-8c8e6d101426","Type":"ContainerStarted","Data":"c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9"} Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.349474 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:14Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.363617 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:14Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.376445 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:14Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.377448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.377501 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.377513 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.377532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.377544 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:14Z","lastTransitionTime":"2026-01-27T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.387189 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:14Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.401714 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:14Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.415643 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:14Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.435457 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:14Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.451101 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:14Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.465160 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:14Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.479359 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.479411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.479428 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.479453 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.479466 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:14Z","lastTransitionTime":"2026-01-27T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.490874 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:14Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.506478 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\
",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:14Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.527830 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:14Z 
is after 2025-08-24T17:21:41Z" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.539533 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:14Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.550677 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:14Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.563866 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:14Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.581478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.581513 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.581523 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.581564 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.581577 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:14Z","lastTransitionTime":"2026-01-27T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.684257 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.684294 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.684302 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.684316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.684326 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:14Z","lastTransitionTime":"2026-01-27T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
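The repeated status-patch failures above share a single root cause: each PATCH is routed through the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-27. A minimal Go sketch for confirming that from the node follows; the endpoint comes straight from the log, everything else is illustrative, and InsecureSkipVerify is set only so the handshake can complete against an expired certificate for inspection.

```go
// Sketch: fetch the webhook's serving certificate and print its
// validity window. For inspection only, never for real traffic.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // let the handshake succeed despite expiry
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	fmt.Printf("subject=%v notBefore=%s notAfter=%s\n",
		certs[0].Subject, certs[0].NotBefore.UTC(), certs[0].NotAfter.UTC())
}
```

crypto/tls performs the same NotAfter comparison during normal verification, which is why every Post to the webhook aborts during the TLS handshake, before any HTTP request is ever sent.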
Has your network provider started?"} Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.787381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.787425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.787436 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.787453 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.787464 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:14Z","lastTransitionTime":"2026-01-27T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.891018 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.891052 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.891065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.891087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.891100 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:14Z","lastTransitionTime":"2026-01-27T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.993753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.993836 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.993846 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.993861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:14 crc kubenswrapper[4858]: I0127 20:08:14.993872 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:14Z","lastTransitionTime":"2026-01-27T20:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.044959 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 01:42:56.863294345 +0000 UTC Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.070226 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:15 crc kubenswrapper[4858]: E0127 20:08:15.070371 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.083627 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.083694 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.083707 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.083728 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.083744 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:15Z","lastTransitionTime":"2026-01-27T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
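Interleaved with the webhook errors, the kubelet keeps publishing Ready=False because the container runtime reports NetworkReady=false: nothing has yet written a CNI configuration into /etc/kubernetes/cni/net.d/ (on an OpenShift/CRC node that is normally done by the OVN-Kubernetes daemons once they come up). A rough sketch of the check behind that message, assuming the file extensions libcni conventionally accepts; the real kubelet/CRI-O path goes through the libcni package rather than an ad-hoc scan like this.

```go
// Sketch: does the CNI conf directory named in the log contain any
// usable network configuration?
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		return
	}
	var found []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni accepts
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		fmt.Println("network plugin not ready: no CNI configuration file found")
		return
	}
	fmt.Println("CNI configs present:", found)
}
```

Until a .conf or .conflist appears there, every pod that needs a new sandbox (like network-check-source just above) stays in ContainerCreating with "network is not ready".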
Has your network provider started?"} Jan 27 20:08:15 crc kubenswrapper[4858]: E0127 20:08:15.106131 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.110833 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.110870 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
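The payload in each failed patch is logged as a Go-quoted string, so the JSON sits under several layers of backslash escaping. strconv.Unquote plus json.Indent recovers a readable form; the fragment below is a hypothetical excerpt shaped like the capacity block in the patch above, not the full payload.

```go
// Sketch: unescape a journal-quoted patch fragment and pretty-print it.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// Hypothetical excerpt, with one level of journal quoting already removed.
	escaped := `"{\"status\":{\"capacity\":{\"cpu\":\"12\",\"memory\":\"32865360Ki\"}}}"`
	raw, err := strconv.Unquote(escaped)
	if err != nil {
		panic(err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(raw), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(pretty.String())
}
```

The $setElementOrder/conditions key marks the payload as a strategic merge patch: it pins the ordering of the conditions list while the individual entries are merged by their type key.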
event="NodeHasNoDiskPressure" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.110882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.110897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.110909 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:15Z","lastTransitionTime":"2026-01-27T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:15 crc kubenswrapper[4858]: E0127 20:08:15.124075 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.127456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.127497 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.127511 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.127528 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.127540 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:15Z","lastTransitionTime":"2026-01-27T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:15 crc kubenswrapper[4858]: E0127 20:08:15.143369 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.147193 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.147225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.147234 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.147250 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.147260 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:15Z","lastTransitionTime":"2026-01-27T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:15 crc kubenswrapper[4858]: E0127 20:08:15.162323 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.165376 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.165407 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.165417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.165431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.165442 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:15Z","lastTransitionTime":"2026-01-27T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:15 crc kubenswrapper[4858]: E0127 20:08:15.181386 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: E0127 20:08:15.181543 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.182892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.182959 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.182974 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.182992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.183006 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:15Z","lastTransitionTime":"2026-01-27T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.285117 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.285163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.285174 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.285190 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.285202 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:15Z","lastTransitionTime":"2026-01-27T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.343774 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerStarted","Data":"da7dba0544ba8fba859656b6cff4f86d4084e8746947037657604387ba6bdeda"} Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.344005 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.347401 4858 generic.go:334] "Generic (PLEG): container finished" podID="1fe084c8-3445-4507-b00f-8c8e6d101426" containerID="c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9" exitCode=0 Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.347429 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" event={"ID":"1fe084c8-3445-4507-b00f-8c8e6d101426","Type":"ContainerDied","Data":"c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9"} Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.362098 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.375797 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.376919 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.391445 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.391484 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.391495 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.391528 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.391539 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:15Z","lastTransitionTime":"2026-01-27T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.392981 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.404243 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.419577 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.433195 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.451932 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121c
dcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.463949 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.476954 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01
-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.492652 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.494037 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.494060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.494068 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.494082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.494093 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:15Z","lastTransitionTime":"2026-01-27T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.503336 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.522388 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da7dba0544ba8fba859656b6cff4f86d4084e8746947037657604387ba6bdeda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.532912 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.543901 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.555922 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.573596 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.585512 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.596320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.596361 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.596325 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.596371 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.596612 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.596643 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:15Z","lastTransitionTime":"2026-01-27T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.610162 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.619284 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.634296 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-sock
et\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da7dba0544ba8fba859656b6cff4f86d4084e8746947037657604387ba6bdeda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.645122 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.656366 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.667827 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.679791 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.691141 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.699345 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.699389 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.699398 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.699417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.699426 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:15Z","lastTransitionTime":"2026-01-27T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.704403 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.715954 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.726589 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.739772 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:15Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.802652 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.802742 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.802755 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.802774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.802785 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:15Z","lastTransitionTime":"2026-01-27T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.906023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.906100 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.906125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.906155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:15 crc kubenswrapper[4858]: I0127 20:08:15.906181 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:15Z","lastTransitionTime":"2026-01-27T20:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.011535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.011694 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.011736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.011775 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.011996 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:16Z","lastTransitionTime":"2026-01-27T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.045329 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:18:20.033654945 +0000 UTC Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.070927 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.071405 4858 scope.go:117] "RemoveContainer" containerID="e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69" Jan 27 20:08:16 crc kubenswrapper[4858]: E0127 20:08:16.071412 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.071715 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:16 crc kubenswrapper[4858]: E0127 20:08:16.071877 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.086204 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.098929 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.114568 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.114617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.114629 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.114647 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.114660 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:16Z","lastTransitionTime":"2026-01-27T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.120581 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da7dba0544ba8fba859656b6cff4f86d4084e8746947037657604387ba6bdeda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.138191 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121c
dcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.149764 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.163260 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01
-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.175406 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.184945 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.195773 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.207841 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.216159 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.216197 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.216208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.216224 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.216236 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:16Z","lastTransitionTime":"2026-01-27T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.218279 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.228684 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.243677 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.256251 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.270158 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.318166 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.318200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.318212 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.318227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.318237 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:16Z","lastTransitionTime":"2026-01-27T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.354714 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" event={"ID":"1fe084c8-3445-4507-b00f-8c8e6d101426","Type":"ContainerStarted","Data":"98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8"} Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.354779 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.355176 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.375309 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.387059 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.398015 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.410875 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.419883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.419915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.419923 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.419936 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.419947 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:16Z","lastTransitionTime":"2026-01-27T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.428227 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.443491 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.456052 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.472074 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.484951 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.499029 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.511290 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.521803 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.522789 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.522822 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.522830 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.522846 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.522856 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:16Z","lastTransitionTime":"2026-01-27T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.535825 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.547829 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.563615 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da7dba0544ba8fba859656b6cff4f86d4084e8746947037657604387ba6bdeda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.581109 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121c
dcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:16Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.624680 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.624724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.624734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.624753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.624766 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:16Z","lastTransitionTime":"2026-01-27T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.726644 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.726689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.726698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.726712 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.726721 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:16Z","lastTransitionTime":"2026-01-27T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.829888 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.829935 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.829949 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.829966 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.829977 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:16Z","lastTransitionTime":"2026-01-27T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.932396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.932439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.932454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.932469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:16 crc kubenswrapper[4858]: I0127 20:08:16.932481 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:16Z","lastTransitionTime":"2026-01-27T20:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.034696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.034731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.034741 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.034759 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.034769 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:17Z","lastTransitionTime":"2026-01-27T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.046249 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 16:16:00.940718041 +0000 UTC Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.070843 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:17 crc kubenswrapper[4858]: E0127 20:08:17.071005 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.137989 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.138062 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.138075 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.138104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.138120 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:17Z","lastTransitionTime":"2026-01-27T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.156416 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn"] Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.156911 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.159181 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.159644 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.169261 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.181991 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.194055 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.211626 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da7dba0544ba8fba859656b6cff4f86d4084e874
6947037657604387ba6bdeda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.229945 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121c
dcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.240952 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.241014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.241029 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.241053 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.241070 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:17Z","lastTransitionTime":"2026-01-27T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.243757 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.252130 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ccbad9b1-e4e8-484e-908d-1695372441e8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wxhcn\" (UID: \"ccbad9b1-e4e8-484e-908d-1695372441e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.252160 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9tbb\" (UniqueName: \"kubernetes.io/projected/ccbad9b1-e4e8-484e-908d-1695372441e8-kube-api-access-w9tbb\") pod \"ovnkube-control-plane-749d76644c-wxhcn\" (UID: \"ccbad9b1-e4e8-484e-908d-1695372441e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.252181 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ccbad9b1-e4e8-484e-908d-1695372441e8-env-overrides\") pod 
\"ovnkube-control-plane-749d76644c-wxhcn\" (UID: \"ccbad9b1-e4e8-484e-908d-1695372441e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.252211 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ccbad9b1-e4e8-484e-908d-1695372441e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wxhcn\" (UID: \"ccbad9b1-e4e8-484e-908d-1695372441e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.255840 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.274132 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.285973 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.299967 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.315429 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.329569 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.342095 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.343417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.343466 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.343478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.343513 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.343526 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:17Z","lastTransitionTime":"2026-01-27T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.353888 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ccbad9b1-e4e8-484e-908d-1695372441e8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wxhcn\" (UID: \"ccbad9b1-e4e8-484e-908d-1695372441e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.353953 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9tbb\" (UniqueName: \"kubernetes.io/projected/ccbad9b1-e4e8-484e-908d-1695372441e8-kube-api-access-w9tbb\") pod \"ovnkube-control-plane-749d76644c-wxhcn\" (UID: \"ccbad9b1-e4e8-484e-908d-1695372441e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.353993 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ccbad9b1-e4e8-484e-908d-1695372441e8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wxhcn\" (UID: \"ccbad9b1-e4e8-484e-908d-1695372441e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.354191 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ccbad9b1-e4e8-484e-908d-1695372441e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wxhcn\" (UID: \"ccbad9b1-e4e8-484e-908d-1695372441e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.355215 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ccbad9b1-e4e8-484e-908d-1695372441e8-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wxhcn\" (UID: \"ccbad9b1-e4e8-484e-908d-1695372441e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.355253 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ccbad9b1-e4e8-484e-908d-1695372441e8-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wxhcn\" (UID: \"ccbad9b1-e4e8-484e-908d-1695372441e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.359403 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.359761 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ccbad9b1-e4e8-484e-908d-1695372441e8-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wxhcn\" (UID: \"ccbad9b1-e4e8-484e-908d-1695372441e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.365207 4858 generic.go:334] "Generic (PLEG): container finished" podID="1fe084c8-3445-4507-b00f-8c8e6d101426" containerID="98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8" exitCode=0 Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.365620 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" event={"ID":"1fe084c8-3445-4507-b00f-8c8e6d101426","Type":"ContainerDied","Data":"98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8"} Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.368595 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.369921 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.370066 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78"} Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.370578 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.375081 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.375742 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9tbb\" (UniqueName: \"kubernetes.io/projected/ccbad9b1-e4e8-484e-908d-1695372441e8-kube-api-access-w9tbb\") pod \"ovnkube-control-plane-749d76644c-wxhcn\" (UID: \"ccbad9b1-e4e8-484e-908d-1695372441e8\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 
20:08:17.383962 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.393630 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.412821 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da7dba0544ba8fba859656b6cff4f86d4084e8746947037657604387ba6bdeda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.436507 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121c
dcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.445927 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.445969 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.445986 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.446009 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.446025 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:17Z","lastTransitionTime":"2026-01-27T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.454051 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.468772 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.471183 4858 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.488361 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08
:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 
20:08:17.503186 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.514252 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.526806 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.545391 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.549450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.549488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.549504 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.549525 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.549541 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:17Z","lastTransitionTime":"2026-01-27T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.561652 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.578542 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.594319 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.613099 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.626832 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.639530 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:17Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.654055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.654126 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.654145 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.654167 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.654180 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:17Z","lastTransitionTime":"2026-01-27T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.757370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.757431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.757443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.757465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.757477 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:17Z","lastTransitionTime":"2026-01-27T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.859838 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.859885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.859895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.859912 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.859927 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:17Z","lastTransitionTime":"2026-01-27T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.962986 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.963037 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.963047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.963066 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:17 crc kubenswrapper[4858]: I0127 20:08:17.963077 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:17Z","lastTransitionTime":"2026-01-27T20:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.047265 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 11:28:58.414269303 +0000 UTC Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.065521 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.065634 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.065646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.065670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.065689 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:18Z","lastTransitionTime":"2026-01-27T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.070864 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.070981 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:18 crc kubenswrapper[4858]: E0127 20:08:18.071031 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:18 crc kubenswrapper[4858]: E0127 20:08:18.071150 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.171183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.171301 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.171377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.171447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.172221 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:18Z","lastTransitionTime":"2026-01-27T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.254656 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-j5hlm"] Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.255037 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:18 crc kubenswrapper[4858]: E0127 20:08:18.255092 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.268863 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:18Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.274155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.274184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.274195 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.274210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.274222 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:18Z","lastTransitionTime":"2026-01-27T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.281979 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:18Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.296489 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:18Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.309049 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:18Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.321254 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:18Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.331694 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:18Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.339750 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:18Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.347976 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:18Z is after 2025-08-24T17:21:41Z" Jan 27 
20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.359255 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:18Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.364164 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvrdq\" (UniqueName: \"kubernetes.io/projected/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-kube-api-access-bvrdq\") pod \"network-metrics-daemon-j5hlm\" (UID: \"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\") " pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.364209 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs\") pod \"network-metrics-daemon-j5hlm\" (UID: \"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\") " pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.368852 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:18Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.372844 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" event={"ID":"ccbad9b1-e4e8-484e-908d-1695372441e8","Type":"ContainerStarted","Data":"b092564aa4580433c9bc8abc353e924033426a01d2dbf7b52b19bead5ca5b3c8"} Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.372961 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.375776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.375807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.375817 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.375832 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.375841 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:18Z","lastTransitionTime":"2026-01-27T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.377962 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-27T20:08:18Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.393200 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da7dba0544ba8fba859656b6cff4f86d4084e8746947037657604387ba6bdeda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:18Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.409732 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121c
dcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:18Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.420897 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:18Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.429998 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01
-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:18Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.441735 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{
\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:18Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.450771 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:18Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.465271 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvrdq\" (UniqueName: \"kubernetes.io/projected/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-kube-api-access-bvrdq\") pod \"network-metrics-daemon-j5hlm\" (UID: \"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\") " pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.465352 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs\") pod \"network-metrics-daemon-j5hlm\" (UID: \"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\") " pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:18 crc kubenswrapper[4858]: E0127 20:08:18.465534 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 20:08:18 crc kubenswrapper[4858]: E0127 20:08:18.465634 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs podName:3fa7e9cb-b195-401a-b57c-bdb47f36ffb8 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:18.965609939 +0000 UTC m=+43.673425685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs") pod "network-metrics-daemon-j5hlm" (UID: "3fa7e9cb-b195-401a-b57c-bdb47f36ffb8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.477932 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.477970 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.477981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.477995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.478006 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:18Z","lastTransitionTime":"2026-01-27T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.532573 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvrdq\" (UniqueName: \"kubernetes.io/projected/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-kube-api-access-bvrdq\") pod \"network-metrics-daemon-j5hlm\" (UID: \"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\") " pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.580260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.580301 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.580311 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.580325 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.580338 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:18Z","lastTransitionTime":"2026-01-27T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.682722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.682759 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.682772 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.682786 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.682796 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:18Z","lastTransitionTime":"2026-01-27T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.784394 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.784428 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.784440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.784456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.784469 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:18Z","lastTransitionTime":"2026-01-27T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.886677 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.886723 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.886735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.886752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.886764 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:18Z","lastTransitionTime":"2026-01-27T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.970730 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs\") pod \"network-metrics-daemon-j5hlm\" (UID: \"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\") " pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:18 crc kubenswrapper[4858]: E0127 20:08:18.970872 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 20:08:18 crc kubenswrapper[4858]: E0127 20:08:18.970959 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs podName:3fa7e9cb-b195-401a-b57c-bdb47f36ffb8 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:19.970939967 +0000 UTC m=+44.678755733 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs") pod "network-metrics-daemon-j5hlm" (UID: "3fa7e9cb-b195-401a-b57c-bdb47f36ffb8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.989111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.989172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.989192 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.989214 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:18 crc kubenswrapper[4858]: I0127 20:08:18.989230 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:18Z","lastTransitionTime":"2026-01-27T20:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.048405 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:12:00.130094698 +0000 UTC Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.071073 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:19 crc kubenswrapper[4858]: E0127 20:08:19.071276 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.091333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.091369 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.091379 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.091393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.091404 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:19Z","lastTransitionTime":"2026-01-27T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.192844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.192877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.192886 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.192899 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.192909 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:19Z","lastTransitionTime":"2026-01-27T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.295837 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.295902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.295914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.295927 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.295938 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:19Z","lastTransitionTime":"2026-01-27T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.380530 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" event={"ID":"ccbad9b1-e4e8-484e-908d-1695372441e8","Type":"ContainerStarted","Data":"d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba"} Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.400360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.400405 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.400416 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.400431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.400442 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:19Z","lastTransitionTime":"2026-01-27T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.503532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.503602 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.503614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.503632 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.503645 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:19Z","lastTransitionTime":"2026-01-27T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.605926 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.605981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.605995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.606012 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.606024 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:19Z","lastTransitionTime":"2026-01-27T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.708617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.708782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.708798 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.708816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.708826 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:19Z","lastTransitionTime":"2026-01-27T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.810919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.810965 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.810976 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.810992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.811002 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:19Z","lastTransitionTime":"2026-01-27T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.913630 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.913682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.913696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.913718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.913736 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:19Z","lastTransitionTime":"2026-01-27T20:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:19 crc kubenswrapper[4858]: I0127 20:08:19.979644 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs\") pod \"network-metrics-daemon-j5hlm\" (UID: \"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\") " pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:19 crc kubenswrapper[4858]: E0127 20:08:19.979847 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 20:08:19 crc kubenswrapper[4858]: E0127 20:08:19.980026 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs podName:3fa7e9cb-b195-401a-b57c-bdb47f36ffb8 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:21.980001847 +0000 UTC m=+46.687817563 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs") pod "network-metrics-daemon-j5hlm" (UID: "3fa7e9cb-b195-401a-b57c-bdb47f36ffb8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.016472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.016524 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.016541 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.016578 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.016593 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:20Z","lastTransitionTime":"2026-01-27T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.049082 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:35:20.61342225 +0000 UTC Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.070607 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.070627 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:20 crc kubenswrapper[4858]: E0127 20:08:20.071003 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:20 crc kubenswrapper[4858]: E0127 20:08:20.070999 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.070625 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:20 crc kubenswrapper[4858]: E0127 20:08:20.071098 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.119073 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.119111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.119122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.119136 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.119146 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:20Z","lastTransitionTime":"2026-01-27T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.222758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.222800 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.222812 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.222831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.222844 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:20Z","lastTransitionTime":"2026-01-27T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.325227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.325297 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.325309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.325342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.325359 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:20Z","lastTransitionTime":"2026-01-27T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.391310 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" event={"ID":"1fe084c8-3445-4507-b00f-8c8e6d101426","Type":"ContainerStarted","Data":"22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af"} Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.408847 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.424195 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.428634 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.428678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.428687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.428704 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.428715 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:20Z","lastTransitionTime":"2026-01-27T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.441737 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.459705 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.476088 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.490033 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.503779 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.519295 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.532094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.532147 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.532166 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.532195 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.532215 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:20Z","lastTransitionTime":"2026-01-27T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.546077 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.562310 4858 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.581646 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da7dba0544ba8fba859656b6cff4f86d4084e8746947037657604387ba6bdeda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.601529 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121c
dcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.613953 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.625439 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01
-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.635728 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.635781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.635795 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.635816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.635833 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:20Z","lastTransitionTime":"2026-01-27T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.638218 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.648735 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.660978 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:20Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.744097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.744147 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.744160 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.744177 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.744190 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:20Z","lastTransitionTime":"2026-01-27T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.847205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.847248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.847256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.847271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.847282 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:20Z","lastTransitionTime":"2026-01-27T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.950199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.950225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.950233 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.950248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:20 crc kubenswrapper[4858]: I0127 20:08:20.950257 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:20Z","lastTransitionTime":"2026-01-27T20:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.049899 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 09:06:28.855488716 +0000 UTC Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.052364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.052400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.052409 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.052424 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.052434 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:21Z","lastTransitionTime":"2026-01-27T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.070060 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:21 crc kubenswrapper[4858]: E0127 20:08:21.070318 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.154699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.154736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.154744 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.154757 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.154768 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:21Z","lastTransitionTime":"2026-01-27T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.257948 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.258012 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.258023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.258039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.258051 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:21Z","lastTransitionTime":"2026-01-27T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.360593 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.360643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.360655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.360673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.360686 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:21Z","lastTransitionTime":"2026-01-27T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.396450 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" event={"ID":"ccbad9b1-e4e8-484e-908d-1695372441e8","Type":"ContainerStarted","Data":"9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a"} Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.414607 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.425108 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.436344 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.448363 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.460696 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.462406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.462448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.462458 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.462472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.462484 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:21Z","lastTransitionTime":"2026-01-27T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.472585 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.484723 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.497352 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.511358 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.522762 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.535122 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 
20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.549349 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.560955 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.564625 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.564672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.564684 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.564699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.564711 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:21Z","lastTransitionTime":"2026-01-27T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.579394 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da7dba0544ba8fba859656b6cff4f86d4084e8746947037657604387ba6bdeda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.597583 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121c
dcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.609518 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.619749 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01
-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:21Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.666488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.666532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.666574 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.666591 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.666604 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:21Z","lastTransitionTime":"2026-01-27T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.769676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.769752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.769764 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.769780 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.769788 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:21Z","lastTransitionTime":"2026-01-27T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.872320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.872367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.872376 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.872390 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.872404 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:21Z","lastTransitionTime":"2026-01-27T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.975110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.975151 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.975159 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.975173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.975182 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:21Z","lastTransitionTime":"2026-01-27T20:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:21 crc kubenswrapper[4858]: I0127 20:08:21.999518 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs\") pod \"network-metrics-daemon-j5hlm\" (UID: \"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\") " pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:21 crc kubenswrapper[4858]: E0127 20:08:21.999671 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 20:08:21 crc kubenswrapper[4858]: E0127 20:08:21.999738 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs podName:3fa7e9cb-b195-401a-b57c-bdb47f36ffb8 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:25.999720642 +0000 UTC m=+50.707536348 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs") pod "network-metrics-daemon-j5hlm" (UID: "3fa7e9cb-b195-401a-b57c-bdb47f36ffb8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.050881 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:25:59.935215853 +0000 UTC Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.070426 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.070429 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.070643 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:22 crc kubenswrapper[4858]: E0127 20:08:22.070581 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:22 crc kubenswrapper[4858]: E0127 20:08:22.070775 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:22 crc kubenswrapper[4858]: E0127 20:08:22.070944 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.077202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.077276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.077287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.077301 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.077311 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:22Z","lastTransitionTime":"2026-01-27T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.179911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.179967 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.179979 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.179998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.180010 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:22Z","lastTransitionTime":"2026-01-27T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.282314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.282359 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.282368 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.282388 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.282400 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:22Z","lastTransitionTime":"2026-01-27T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.385015 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.385045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.385052 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.385071 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.385080 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:22Z","lastTransitionTime":"2026-01-27T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.400624 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovnkube-controller/0.log" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.403808 4858 generic.go:334] "Generic (PLEG): container finished" podID="5cda3ac1-7db7-4215-a301-b757743bff59" containerID="da7dba0544ba8fba859656b6cff4f86d4084e8746947037657604387ba6bdeda" exitCode=1 Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.403872 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerDied","Data":"da7dba0544ba8fba859656b6cff4f86d4084e8746947037657604387ba6bdeda"} Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.404641 4858 scope.go:117] "RemoveContainer" containerID="da7dba0544ba8fba859656b6cff4f86d4084e8746947037657604387ba6bdeda" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.420420 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.438779 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da7dba0544ba8fba859656b6cff4f86d4084e8746947037657604387ba6bdeda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da7dba0544ba8fba859656b6cff4f86d4084e8746947037657604387ba6bdeda\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI0127 20:08:21.495779 6126 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 20:08:21.495816 6126 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 20:08:21.495839 6126 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 20:08:21.496359 6126 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 20:08:21.496390 6126 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 20:08:21.496401 6126 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 20:08:21.496418 6126 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 20:08:21.496425 6126 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 20:08:21.496437 6126 factory.go:656] Stopping watch factory\\\\nI0127 20:08:21.496447 6126 ovnkube.go:599] Stopped ovnkube\\\\nI0127 20:08:21.496463 6126 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 20:08:21.496464 6126 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 20:08:21.496474 6126 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 20:08:21.496477 6126 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef
0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.461798 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.477879 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.487646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.487688 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.487699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.487715 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.487725 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:22Z","lastTransitionTime":"2026-01-27T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.494808 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.513642 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:22Z is after 
2025-08-24T17:21:41Z" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.527966 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.539611 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.551897 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.564652 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.577704 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.589876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.589918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.589928 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.589945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.589955 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:22Z","lastTransitionTime":"2026-01-27T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.592853 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.605198 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.616279 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.630368 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.643738 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.654274 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:22Z is after 2025-08-24T17:21:41Z" Jan 27 
20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.691705 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.691756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.691770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.691793 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.691808 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:22Z","lastTransitionTime":"2026-01-27T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.794713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.794753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.794768 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.794785 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.794796 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:22Z","lastTransitionTime":"2026-01-27T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.897217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.897247 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.897257 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.897271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.897282 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:22Z","lastTransitionTime":"2026-01-27T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.999067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.999109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.999121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.999138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:22 crc kubenswrapper[4858]: I0127 20:08:22.999150 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:22Z","lastTransitionTime":"2026-01-27T20:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.051934 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:50:57.456196509 +0000 UTC
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.070532 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 20:08:23 crc kubenswrapper[4858]: E0127 20:08:23.070694 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.102068 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.102101 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.102113 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.102129 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.102140 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:23Z","lastTransitionTime":"2026-01-27T20:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.204877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.204944 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.204961 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.204983 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.204996 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:23Z","lastTransitionTime":"2026-01-27T20:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.307500 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.307570 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.307581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.307596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.307606 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:23Z","lastTransitionTime":"2026-01-27T20:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.409951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.410004 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.410016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.410037 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.410049 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:23Z","lastTransitionTime":"2026-01-27T20:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.410934 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovnkube-controller/1.log"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.411592 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovnkube-controller/0.log"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.414865 4858 generic.go:334] "Generic (PLEG): container finished" podID="5cda3ac1-7db7-4215-a301-b757743bff59" containerID="72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112" exitCode=1
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.414963 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerDied","Data":"72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112"}
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.415050 4858 scope.go:117] "RemoveContainer" containerID="da7dba0544ba8fba859656b6cff4f86d4084e8746947037657604387ba6bdeda"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.416113 4858 scope.go:117] "RemoveContainer" containerID="72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112"
Jan 27 20:08:23 crc kubenswrapper[4858]: E0127 20:08:23.416538 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.435417 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:23Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.449635 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:23Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.464790 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:23Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.479108 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:23Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.496275 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:23Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.513234 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:23Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.513868 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.513942 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.513956 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.513985 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.514001 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:23Z","lastTransitionTime":"2026-01-27T20:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.528379 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:23Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.544204 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:23Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.564637 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:23Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.580855 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:23Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.600203 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:23Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.613403 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:23Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.616608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.616652 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.616665 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.616688 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.616707 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:23Z","lastTransitionTime":"2026-01-27T20:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.641759 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da7dba0544ba8fba859656b6cff4f86d4084e8746947037657604387ba6bdeda\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI0127 20:08:21.495779 6126 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 20:08:21.495816 6126 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 20:08:21.495839 6126 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 20:08:21.496359 6126 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 20:08:21.496390 6126 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 20:08:21.496401 6126 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 20:08:21.496418 6126 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 20:08:21.496425 6126 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 20:08:21.496437 6126 factory.go:656] Stopping watch factory\\\\nI0127 20:08:21.496447 6126 ovnkube.go:599] Stopped ovnkube\\\\nI0127 20:08:21.496463 6126 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 20:08:21.496464 6126 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 20:08:21.496474 6126 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 20:08:21.496477 6126 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:23Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 20:08:23.232803 6404 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 20:08:23.232828 6404 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 20:08:23.232847 6404 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 20:08:23.232881 6404 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 20:08:23.232915 6404 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 20:08:23.233232 6404 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 20:08:23.233318 6404 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 20:08:23.233347 6404 ovnkube.go:599] Stopped ovnkube\\\\nI0127 20:08:23.233372 6404 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 20:08:23.233441 6404 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:23Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.664862 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb
32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:23Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.684779 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:23Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.701838 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:23Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.718446 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:23Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.720292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.720335 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.720344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.720363 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.720377 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:23Z","lastTransitionTime":"2026-01-27T20:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.823761 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.823807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.823817 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.823831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.823844 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:23Z","lastTransitionTime":"2026-01-27T20:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.927409 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.927459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.927468 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.927488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:23 crc kubenswrapper[4858]: I0127 20:08:23.927499 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:23Z","lastTransitionTime":"2026-01-27T20:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.030111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.030166 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.030175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.030188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.030216 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:24Z","lastTransitionTime":"2026-01-27T20:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.052306 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 01:35:42.635809873 +0000 UTC Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.070986 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.071014 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:24 crc kubenswrapper[4858]: E0127 20:08:24.071188 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.071252 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:24 crc kubenswrapper[4858]: E0127 20:08:24.071351 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:24 crc kubenswrapper[4858]: E0127 20:08:24.071446 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.132904 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.132962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.132976 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.132993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.133004 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:24Z","lastTransitionTime":"2026-01-27T20:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.235994 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.236041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.236052 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.236069 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.236401 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:24Z","lastTransitionTime":"2026-01-27T20:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.333229 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.339110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.339162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.339174 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.339221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.339236 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:24Z","lastTransitionTime":"2026-01-27T20:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.420628 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovnkube-controller/1.log" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.423900 4858 scope.go:117] "RemoveContainer" containerID="72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112" Jan 27 20:08:24 crc kubenswrapper[4858]: E0127 20:08:24.424071 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.442747 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.442811 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.442823 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.442839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.442850 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:24Z","lastTransitionTime":"2026-01-27T20:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.444481 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.458204 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.471477 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.482289 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.494523 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 
20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.515676 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.531715 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.546029 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.546073 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.546084 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.546110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.546124 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:24Z","lastTransitionTime":"2026-01-27T20:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.546312 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.561599 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.587164 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:23Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 20:08:23.232803 6404 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 20:08:23.232828 6404 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 20:08:23.232847 6404 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 20:08:23.232881 6404 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 20:08:23.232915 6404 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 20:08:23.233232 6404 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 20:08:23.233318 6404 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 20:08:23.233347 6404 ovnkube.go:599] Stopped ovnkube\\\\nI0127 20:08:23.233372 6404 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 20:08:23.233441 6404 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.614524 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a41
5fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.631133 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.646374 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.649039 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.649086 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.649098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.649123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.649138 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:24Z","lastTransitionTime":"2026-01-27T20:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.668193 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.681697 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.694850 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.709410 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:24Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.752042 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.752094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.752109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.752132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.752147 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:24Z","lastTransitionTime":"2026-01-27T20:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.854258 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.854289 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.854314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.854330 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.854339 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:24Z","lastTransitionTime":"2026-01-27T20:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.956862 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.956934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.956946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.956960 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:24 crc kubenswrapper[4858]: I0127 20:08:24.956969 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:24Z","lastTransitionTime":"2026-01-27T20:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.053175 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:00:12.636991284 +0000 UTC Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.059648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.059720 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.059740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.059768 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.059789 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:25Z","lastTransitionTime":"2026-01-27T20:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.070214 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:25 crc kubenswrapper[4858]: E0127 20:08:25.070362 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.162467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.162522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.162532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.162576 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.162586 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:25Z","lastTransitionTime":"2026-01-27T20:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.219008 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.219063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.219075 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.219095 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.219108 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:25Z","lastTransitionTime":"2026-01-27T20:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:25 crc kubenswrapper[4858]: E0127 20:08:25.235910 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:25Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.244665 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.244724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.244741 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.244774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.244797 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:25Z","lastTransitionTime":"2026-01-27T20:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:25 crc kubenswrapper[4858]: E0127 20:08:25.265139 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:25Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.269276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.269337 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.269351 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.269374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.269387 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:25Z","lastTransitionTime":"2026-01-27T20:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:25 crc kubenswrapper[4858]: E0127 20:08:25.287631 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:25Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.291515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.291593 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.291605 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.291622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.291670 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:25Z","lastTransitionTime":"2026-01-27T20:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:25 crc kubenswrapper[4858]: E0127 20:08:25.307012 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:25Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.311305 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.311351 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.311365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.311393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.311411 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:25Z","lastTransitionTime":"2026-01-27T20:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:25 crc kubenswrapper[4858]: E0127 20:08:25.325821 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:25Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:25 crc kubenswrapper[4858]: E0127 20:08:25.325978 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.328337 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.328409 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.328430 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.328455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.328479 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:25Z","lastTransitionTime":"2026-01-27T20:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.431745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.431805 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.431817 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.431841 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.431856 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:25Z","lastTransitionTime":"2026-01-27T20:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.534387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.534426 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.534462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.534480 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.534491 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:25Z","lastTransitionTime":"2026-01-27T20:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.637456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.637496 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.637509 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.637530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.637541 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:25Z","lastTransitionTime":"2026-01-27T20:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.740593 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.740638 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.740647 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.740662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.740671 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:25Z","lastTransitionTime":"2026-01-27T20:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.843813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.843899 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.843920 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.843941 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.843955 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:25Z","lastTransitionTime":"2026-01-27T20:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.947802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.947861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.947881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.947903 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:25 crc kubenswrapper[4858]: I0127 20:08:25.947956 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:25Z","lastTransitionTime":"2026-01-27T20:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.041200 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs\") pod \"network-metrics-daemon-j5hlm\" (UID: \"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\") " pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:26 crc kubenswrapper[4858]: E0127 20:08:26.041351 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 20:08:26 crc kubenswrapper[4858]: E0127 20:08:26.041442 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs podName:3fa7e9cb-b195-401a-b57c-bdb47f36ffb8 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:34.041419461 +0000 UTC m=+58.749235177 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs") pod "network-metrics-daemon-j5hlm" (UID: "3fa7e9cb-b195-401a-b57c-bdb47f36ffb8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.050834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.050990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.051017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.051092 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.051119 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:26Z","lastTransitionTime":"2026-01-27T20:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.054149 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:21:26.17908478 +0000 UTC Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.070876 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.070913 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.070960 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:26 crc kubenswrapper[4858]: E0127 20:08:26.070993 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:26 crc kubenswrapper[4858]: E0127 20:08:26.071109 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:26 crc kubenswrapper[4858]: E0127 20:08:26.071237 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.086850 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:26Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.101451 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:26Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.114099 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:26Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.126010 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:26Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.145771 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:23Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 20:08:23.232803 6404 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 20:08:23.232828 6404 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 20:08:23.232847 6404 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 20:08:23.232881 6404 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 20:08:23.232915 6404 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 20:08:23.233232 6404 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 20:08:23.233318 6404 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 20:08:23.233347 6404 ovnkube.go:599] Stopped ovnkube\\\\nI0127 20:08:23.233372 6404 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 20:08:23.233441 6404 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:26Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.153393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.153435 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.153452 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.153471 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.153483 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:26Z","lastTransitionTime":"2026-01-27T20:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.168236 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:26Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.181112 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:26Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.191888 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:26Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.205774 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:26Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.219126 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:26Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.230000 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:26Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.245004 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:26Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.255183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.255236 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.255251 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.255269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.255282 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:26Z","lastTransitionTime":"2026-01-27T20:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.265156 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:26Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.280251 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:26Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.293971 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:26Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.303906 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:26Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.313050 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:26Z is after 2025-08-24T17:21:41Z"
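[annotation] The err strings above carry the failing request in-band: each is a strategic-merge patch the kubelet's status manager tried to apply for a pod, and the $setElementOrder/conditions directive is how that patch format pins the ordering of the conditions list, which the API server otherwise merges element-by-element on the "type" key. A minimal sketch of producing such a patch with the upstream library, assuming k8s.io/api and k8s.io/apimachinery are on the module path (the two statuses below are invented for illustration, not taken from this log):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	// Invented before/after statuses: the Ready condition flips to True.
	oldPod := corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodInitialized, Status: corev1.ConditionTrue},
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}}}
	newPod := corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodInitialized, Status: corev1.ConditionTrue},
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}}}

	oldJSON, err := json.Marshal(oldPod)
	if err != nil {
		panic(err)
	}
	newJSON, err := json.Marshal(newPod)
	if err != nil {
		panic(err)
	}

	// Two-way strategic merge patch against the Pod schema: conditions are
	// merged by their "type" key, and the library can record list ordering
	// in a $setElementOrder/conditions directive like the ones in the log.
	patch, err := strategicpatch.CreateTwoWayMergePatch(oldJSON, newJSON, corev1.Pod{})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(patch))
}
```

Note the patches themselves are well-formed; every rejection below happens later, at the admission-webhook hop.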
Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.357291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.357342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.357351 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.357364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:26 crc kubenswrapper[4858]: I0127 20:08:26.357372 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:26Z","lastTransitionTime":"2026-01-27T20:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... identical five-record node-status groups (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, setters.go "Node became not ready") repeat at ~100 ms intervals: 20:08:26.460, .562, .666, .768, .871, .974 ...]
Jan 27 20:08:27 crc kubenswrapper[4858]: I0127 20:08:27.055192 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:44:45.618763464 +0000 UTC Jan 27 20:08:27 crc kubenswrapper[4858]: I0127 20:08:27.070306 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:27 crc kubenswrapper[4858]: E0127 20:08:27.070424 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... the same group repeats: 20:08:27.076, .178, .281, .384, .485, .588, .690, .793, .895, .999 ...]
Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.056349 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:18:25.378764203 +0000 UTC Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.070881 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:28 crc kubenswrapper[4858]: E0127 20:08:28.071037 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.071091 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.071138 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:28 crc kubenswrapper[4858]: E0127 20:08:28.071174 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:28 crc kubenswrapper[4858]: E0127 20:08:28.071213 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... the same group repeats: 20:08:28.101, .204 ...]
Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.310072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.310126 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.310162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.310203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.310224 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:28Z","lastTransitionTime":"2026-01-27T20:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
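[annotation] Between the patch failures, the node-status loop keeps re-recording the same four events plus the KubeletNotReady condition because the container runtime still reports NetworkReady=false: nothing has written a CNI config into /etc/kubernetes/cni/net.d/ yet while ovnkube-node comes up. A rough sketch of the readiness test behind that message, assuming the usual libcni convention that any *.conf, *.conflist, or *.json file in the directory counts as a network configuration (illustrative, not CRI-O's literal code):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// networkReady mimics the check a CRI runtime performs on each status
// poll: NetworkReady stays false until a CNI config file exists.
func networkReady(confDir string) (bool, string) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err.Error()
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, "found " + e.Name()
		}
	}
	return false, "no CNI configuration file in " + confDir
}

func main() {
	ok, detail := networkReady("/etc/kubernetes/cni/net.d")
	fmt.Println("NetworkReady:", ok, "-", detail)
}
```

Once the network plugin writes its config file there, the next status poll should flip NetworkReady to true and this group of records stops repeating.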
Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.326787 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.336161 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.339911 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:28Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.349639 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:28Z is after 2025-08-24T17:21:41Z"
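[annotation] Every one of these rejections shares the root cause spelled out in the error text: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate whose NotAfter (2025-08-24T17:21:41Z) is months behind the node clock (2026-01-27), so the API server cannot complete the TLS handshake and each status patch dies as an Internal error; the pods themselves keep running, since only this admission path is blocked. A small read-only diagnostic sketch (not part of the cluster) for inspecting the offending certificate's validity window from the node, assuming the endpoint accepts a plain TLS handshake; verification is disabled only so the handshake survives long enough to read the expired chain:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Endpoint taken from the log lines above. InsecureSkipVerify lets the
	// handshake finish even though the chain is expired, so the leaf
	// certificate can be read; this is a diagnostic, not a fix.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:   ", cert.Subject)
	fmt.Println("not before:", cert.NotBefore)
	fmt.Println("not after: ", cert.NotAfter) // 2025-08-24T17:21:41Z per the log
}
```

A NotAfter in the past confirms the webhook's serving certificate, not the kubelet, is what needs rotating.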
Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.362509 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:28Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.371392 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:28Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.388100 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:23Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 20:08:23.232803 6404 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 20:08:23.232828 6404 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 20:08:23.232847 6404 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 20:08:23.232881 6404 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 20:08:23.232915 6404 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 20:08:23.233232 6404 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 20:08:23.233318 6404 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 20:08:23.233347 6404 ovnkube.go:599] Stopped ovnkube\\\\nI0127 20:08:23.233372 6404 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 20:08:23.233441 6404 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:28Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.405378 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a41
5fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:28Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.411777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.411822 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:28 crc 
kubenswrapper[4858]: I0127 20:08:28.411834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.411849 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.411857 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:28Z","lastTransitionTime":"2026-01-27T20:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.416298 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:28Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.429940 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:28Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.443749 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:28Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.456302 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:28Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.469235 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:28Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.479293 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:28Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.490407 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:28Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.506617 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:28Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.514259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.514305 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.514316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.514333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.514347 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:28Z","lastTransitionTime":"2026-01-27T20:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.521673 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:28Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.534089 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:28Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.544280 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:28Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.616741 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.616779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.616787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.616803 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.616814 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:28Z","lastTransitionTime":"2026-01-27T20:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.719443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.719493 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.719502 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.719518 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.719529 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:28Z","lastTransitionTime":"2026-01-27T20:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.822894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.822929 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.822937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.822950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.822960 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:28Z","lastTransitionTime":"2026-01-27T20:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.926253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.926333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.926353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.926381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:28 crc kubenswrapper[4858]: I0127 20:08:28.926401 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:28Z","lastTransitionTime":"2026-01-27T20:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.029485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.029618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.029633 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.029654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.029665 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:29Z","lastTransitionTime":"2026-01-27T20:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.057169 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:05:25.192730208 +0000 UTC Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.070420 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:29 crc kubenswrapper[4858]: E0127 20:08:29.070601 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.132526 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.132604 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.132617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.132644 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.132670 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:29Z","lastTransitionTime":"2026-01-27T20:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.236198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.236275 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.236300 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.236334 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.236362 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:29Z","lastTransitionTime":"2026-01-27T20:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.339144 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.339200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.339214 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.339241 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.339257 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:29Z","lastTransitionTime":"2026-01-27T20:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.441348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.441388 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.441398 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.441415 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.441426 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:29Z","lastTransitionTime":"2026-01-27T20:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.545421 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.545509 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.545538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.545624 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.545651 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:29Z","lastTransitionTime":"2026-01-27T20:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.649507 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.649655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.649688 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.649722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.649748 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:29Z","lastTransitionTime":"2026-01-27T20:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.752091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.752142 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.752158 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.752179 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.752196 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:29Z","lastTransitionTime":"2026-01-27T20:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
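
The repeating "Node became not ready" condition is driven by the container runtime reporting NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/. A rough stand-in for that readiness check is sketched below, assuming the conventional *.conf, *.conflist, and *.json extensions that libcni loads; the real probe lives in the runtime, not the kubelet.

// Stand-in for the NetworkReady probe: the network is considered ready
// only once a CNI config file exists in the configured directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if !ok {
		fmt.Println("NetworkReady=false: no CNI configuration file found")
		return
	}
	fmt.Println("NetworkReady=true")
}
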
Has your network provider started?"} Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.854344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.854405 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.854418 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.854438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.854451 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:29Z","lastTransitionTime":"2026-01-27T20:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.882193 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.882363 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:29 crc kubenswrapper[4858]: E0127 20:08:29.882496 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:09:01.882465004 +0000 UTC m=+86.590280710 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:08:29 crc kubenswrapper[4858]: E0127 20:08:29.882609 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:08:29 crc kubenswrapper[4858]: E0127 20:08:29.882675 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:09:01.88266419 +0000 UTC m=+86.590480116 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.882740 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:29 crc kubenswrapper[4858]: E0127 20:08:29.882892 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 20:08:29 crc kubenswrapper[4858]: E0127 20:08:29.882945 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:09:01.882937428 +0000 UTC m=+86.590753334 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.961874 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.961934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.961944 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.961962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.961976 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:29Z","lastTransitionTime":"2026-01-27T20:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
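
The nestedpendingoperations errors above show the volume manager's per-operation exponential backoff: after repeated failures, the mount may not be retried for 32s (here, not until 20:09:01). The sketch below models that doubling backoff; the 500ms initial delay and roughly two-minute cap are assumptions chosen to be consistent with the observed 32s step, not values read from this kubelet.

// Sketch of per-operation exponential backoff: each failure doubles
// the wait before the next retry, up to a cap.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	delay time.Duration // current durationBeforeRetry
	next  time.Time     // no retries permitted until this instant
}

func (b *backoff) fail(now time.Time) {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // assumed initial delay
	} else {
		b.delay *= 2
	}
	if cap := 2*time.Minute + 2*time.Second; b.delay > cap { // assumed cap
		b.delay = cap
	}
	b.next = now.Add(b.delay)
}

func (b *backoff) allowed(now time.Time) bool { return !now.Before(b.next) }

func main() {
	var b backoff
	now := time.Now()
	for i := 1; i <= 7; i++ { // the seventh consecutive failure reaches 32s
		b.fail(now)
		fmt.Printf("failure %d: no retries permitted for %v\n", i, b.delay)
	}
	fmt.Println("retry allowed immediately?", b.allowed(now))
}
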
Has your network provider started?"} Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.983833 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:29 crc kubenswrapper[4858]: I0127 20:08:29.983929 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:29 crc kubenswrapper[4858]: E0127 20:08:29.984101 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 20:08:29 crc kubenswrapper[4858]: E0127 20:08:29.984125 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 20:08:29 crc kubenswrapper[4858]: E0127 20:08:29.984141 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:08:29 crc kubenswrapper[4858]: E0127 20:08:29.984169 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 20:08:29 crc kubenswrapper[4858]: E0127 20:08:29.984204 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 20:09:01.984184503 +0000 UTC m=+86.692000199 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:08:29 crc kubenswrapper[4858]: E0127 20:08:29.984217 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 20:08:29 crc kubenswrapper[4858]: E0127 20:08:29.984243 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:08:29 crc kubenswrapper[4858]: E0127 20:08:29.984348 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 20:09:01.984318317 +0000 UTC m=+86.692134053 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.057773 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:03:22.072307424 +0000 UTC Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.065053 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.065103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.065115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.065132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.065145 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:30Z","lastTransitionTime":"2026-01-27T20:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.070607 4858 util.go:30] "No sandbox for pod can be found. 
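
The kube-api-access-* volumes failing above are projected volumes composed of the service-account token plus the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, and the SetUp calls fail because the kubelet only serves ConfigMaps and Secrets for pods it has registered since startup. Below is a toy version of that registration gate, reproducing the error shape object "ns"/"name" not registered; the method names are illustrative, not the kubelet's.

// Toy registration-gated object cache: Get fails for any object that
// no registered pod has referenced yet, matching the log's error text.
package main

import "fmt"

type key struct{ namespace, name string }

type objectCache struct {
	registered map[key]bool
	data       map[key]string
}

func (c *objectCache) RegisterPodObject(ns, name, contents string) {
	k := key{ns, name}
	c.registered[k] = true
	c.data[k] = contents
}

func (c *objectCache) Get(ns, name string) (string, error) {
	k := key{ns, name}
	if !c.registered[k] {
		return "", fmt.Errorf("object %q/%q not registered", ns, name)
	}
	return c.data[k], nil
}

func main() {
	c := &objectCache{registered: map[key]bool{}, data: map[key]string{}}
	if _, err := c.Get("openshift-network-diagnostics", "kube-root-ca.crt"); err != nil {
		fmt.Println(err) // same shape as the MountVolume.SetUp failures
	}
	c.RegisterPodObject("openshift-network-diagnostics", "kube-root-ca.crt", "<CA bundle>")
	if v, err := c.Get("openshift-network-diagnostics", "kube-root-ca.crt"); err == nil {
		fmt.Println("mount can proceed with", len(v), "bytes")
	}
}
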
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.070619 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:30 crc kubenswrapper[4858]: E0127 20:08:30.070731 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.070749 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:30 crc kubenswrapper[4858]: E0127 20:08:30.070926 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:30 crc kubenswrapper[4858]: E0127 20:08:30.071085 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.167175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.167223 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.167234 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.167250 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.167262 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:30Z","lastTransitionTime":"2026-01-27T20:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.269992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.270028 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.270037 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.270051 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.270059 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:30Z","lastTransitionTime":"2026-01-27T20:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.372406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.372460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.372474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.372493 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.372506 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:30Z","lastTransitionTime":"2026-01-27T20:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.474346 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.474391 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.474399 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.474412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.474421 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:30Z","lastTransitionTime":"2026-01-27T20:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
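
Each kubenswrapper payload uses klog's header format, <severity><MMDD> <hh:mm:ss.micros> <pid> <file:line>] <message>, which makes floods like the one above easy to filter or count mechanically. A small parser over one line copied from this log:

// Parse a klog-style line into its header fields. \s+ tolerates the
// padded pid column klog sometimes emits.
package main

import (
	"fmt"
	"regexp"
)

var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+)\s+([\w.]+:\d+)\] (.*)$`)

func main() {
	sample := `I0127 20:08:30.680529 4858 setters.go:603] "Node became not ready" node="crc"`
	m := klogLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
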
Has your network provider started?"} Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.577463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.577522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.577591 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.577618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.577635 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:30Z","lastTransitionTime":"2026-01-27T20:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.680266 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.680308 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.680316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.680520 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.680529 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:30Z","lastTransitionTime":"2026-01-27T20:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.783384 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.783425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.783433 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.783447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.783457 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:30Z","lastTransitionTime":"2026-01-27T20:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
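
The certificate_manager entries (20:08:29.057169 and 20:08:30.057773 above, and again at 20:08:31) print a jittered rotation deadline that is recomputed on each sync, and every deadline shown is already in the past, so kubelet-serving certificate rotation is overdue. A sketch, assuming the deadline is drawn uniformly from the 70-90% span of the certificate lifetime (upstream client-go uses a similar jittered fraction); the one-year lifetime is also an assumption.

// Compute a jittered rotation deadline inside the certificate lifetime.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time, r *rand.Rand) time.Time {
	lifetime := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*r.Float64() // assumed jitter window
	return notBefore.Add(time.Duration(float64(lifetime) * fraction))
}

func main() {
	r := rand.New(rand.NewSource(1))
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiry from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // assumed lifetime
	deadline := rotationDeadline(notBefore, notAfter, r)
	fmt.Println("rotation deadline:", deadline.UTC())
	fmt.Println("rotation overdue:", time.Now().After(deadline))
}
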
Has your network provider started?"} Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.886256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.886309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.886324 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.886342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.886354 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:30Z","lastTransitionTime":"2026-01-27T20:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.989301 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.989355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.989366 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.989385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:30 crc kubenswrapper[4858]: I0127 20:08:30.989405 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:30Z","lastTransitionTime":"2026-01-27T20:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.058372 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 22:40:15.854635572 +0000 UTC Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.070775 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:31 crc kubenswrapper[4858]: E0127 20:08:31.070893 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.091695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.091766 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.091777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.091795 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.091813 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:31Z","lastTransitionTime":"2026-01-27T20:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.194094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.194135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.194143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.194156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.194164 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:31Z","lastTransitionTime":"2026-01-27T20:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.296356 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.296406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.296417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.296431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.296444 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:31Z","lastTransitionTime":"2026-01-27T20:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.398031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.398089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.398100 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.398113 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.398156 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:31Z","lastTransitionTime":"2026-01-27T20:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.500515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.500569 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.500581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.500597 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.500608 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:31Z","lastTransitionTime":"2026-01-27T20:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.602400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.602463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.602471 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.602484 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.602493 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:31Z","lastTransitionTime":"2026-01-27T20:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.704607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.704654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.704663 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.704677 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.704687 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:31Z","lastTransitionTime":"2026-01-27T20:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.807689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.807769 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.807788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.807819 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.807846 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:31Z","lastTransitionTime":"2026-01-27T20:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.910436 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.910474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.910485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.910501 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:31 crc kubenswrapper[4858]: I0127 20:08:31.910512 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:31Z","lastTransitionTime":"2026-01-27T20:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.012746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.012787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.012798 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.012813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.012823 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:32Z","lastTransitionTime":"2026-01-27T20:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.059288 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 03:31:21.181811692 +0000 UTC Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.070581 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.070613 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:32 crc kubenswrapper[4858]: E0127 20:08:32.070704 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.070594 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:32 crc kubenswrapper[4858]: E0127 20:08:32.070787 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:32 crc kubenswrapper[4858]: E0127 20:08:32.070851 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.115122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.115160 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.115202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.115217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.115229 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:32Z","lastTransitionTime":"2026-01-27T20:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.217520 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.217610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.217632 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.217661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.217683 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:32Z","lastTransitionTime":"2026-01-27T20:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.321283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.321332 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.321353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.321370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.321382 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:32Z","lastTransitionTime":"2026-01-27T20:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.424759 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.424859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.424884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.424914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.424937 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:32Z","lastTransitionTime":"2026-01-27T20:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.528416 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.528742 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.528840 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.528937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.529030 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:32Z","lastTransitionTime":"2026-01-27T20:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.632179 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.632261 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.632277 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.632298 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.632314 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:32Z","lastTransitionTime":"2026-01-27T20:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.734033 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.734079 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.734104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.734120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.734129 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:32Z","lastTransitionTime":"2026-01-27T20:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.836587 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.836634 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.836645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.836661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.836673 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:32Z","lastTransitionTime":"2026-01-27T20:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.938882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.938936 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.938951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.938971 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:32 crc kubenswrapper[4858]: I0127 20:08:32.938987 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:32Z","lastTransitionTime":"2026-01-27T20:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.042070 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.042112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.042122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.042140 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.042149 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:33Z","lastTransitionTime":"2026-01-27T20:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.060025 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:18:41.971208808 +0000 UTC Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.070378 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:33 crc kubenswrapper[4858]: E0127 20:08:33.070543 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.144420 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.144461 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.144474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.144491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.144504 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:33Z","lastTransitionTime":"2026-01-27T20:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.168780 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.181454 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:33 crc 
kubenswrapper[4858]: I0127 20:08:33.191309 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.202145 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.221748 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.235963 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.247067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.247107 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.247117 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.247133 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.247144 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:33Z","lastTransitionTime":"2026-01-27T20:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.248389 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.258052 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.268703 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.280741 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.290512 4858 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 
20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.303077 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.311919 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.328978 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:23Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 20:08:23.232803 6404 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 20:08:23.232828 6404 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 20:08:23.232847 6404 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 20:08:23.232881 6404 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 20:08:23.232915 6404 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 20:08:23.233232 6404 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 20:08:23.233318 6404 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 20:08:23.233347 6404 ovnkube.go:599] Stopped ovnkube\\\\nI0127 20:08:23.233372 6404 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 20:08:23.233441 6404 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.347759 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a41
5fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.349051 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.349085 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:33 crc 
kubenswrapper[4858]: I0127 20:08:33.349096 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.349113 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.349125 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:33Z","lastTransitionTime":"2026-01-27T20:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.358821 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6e9abcc-3467-43d4-809d-d4d9c3d19a17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bcbab522a48af8a7103c1e3c0a2bf06df8763675f2f39b24f559d3a40ae32e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7f8cb8cd1313fee38d658392c84878c4f22e406e5b48926b09a362999077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://d11991cd32eec68e9104c1f58fc2bd7d2f78a38e0f3217d4dd1bbc52038bed63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.368822 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.380121 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.390464 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:33Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.451120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.451173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.451187 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.451206 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.451224 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:33Z","lastTransitionTime":"2026-01-27T20:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.553569 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.553606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.553618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.553634 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.553645 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:33Z","lastTransitionTime":"2026-01-27T20:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.655725 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.655764 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.655773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.655785 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.655794 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:33Z","lastTransitionTime":"2026-01-27T20:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.758064 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.758106 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.758114 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.758130 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.758139 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:33Z","lastTransitionTime":"2026-01-27T20:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.860405 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.860506 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.860526 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.860594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.860615 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:33Z","lastTransitionTime":"2026-01-27T20:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.963368 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.963479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.963518 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.963540 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:33 crc kubenswrapper[4858]: I0127 20:08:33.963586 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:33Z","lastTransitionTime":"2026-01-27T20:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.061400 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 21:45:24.684503464 +0000 UTC Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.066147 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.066174 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.066184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.066198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.066209 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:34Z","lastTransitionTime":"2026-01-27T20:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.070215 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.070245 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:34 crc kubenswrapper[4858]: E0127 20:08:34.070311 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:34 crc kubenswrapper[4858]: E0127 20:08:34.070390 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.070702 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:34 crc kubenswrapper[4858]: E0127 20:08:34.070761 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.124375 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs\") pod \"network-metrics-daemon-j5hlm\" (UID: \"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\") " pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:34 crc kubenswrapper[4858]: E0127 20:08:34.124572 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 20:08:34 crc kubenswrapper[4858]: E0127 20:08:34.124692 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs podName:3fa7e9cb-b195-401a-b57c-bdb47f36ffb8 nodeName:}" failed. No retries permitted until 2026-01-27 20:08:50.124672254 +0000 UTC m=+74.832487970 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs") pod "network-metrics-daemon-j5hlm" (UID: "3fa7e9cb-b195-401a-b57c-bdb47f36ffb8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.168245 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.168312 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.168322 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.168338 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.168350 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:34Z","lastTransitionTime":"2026-01-27T20:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.271304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.271397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.271423 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.271458 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.271483 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:34Z","lastTransitionTime":"2026-01-27T20:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.374135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.374198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.374213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.374235 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.374250 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:34Z","lastTransitionTime":"2026-01-27T20:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.476979 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.477017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.477079 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.477098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.477109 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:34Z","lastTransitionTime":"2026-01-27T20:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.579690 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.579753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.579766 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.579781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.579790 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:34Z","lastTransitionTime":"2026-01-27T20:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.682393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.682428 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.682436 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.682450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.682459 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:34Z","lastTransitionTime":"2026-01-27T20:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.785142 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.785189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.785203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.785221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.785233 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:34Z","lastTransitionTime":"2026-01-27T20:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.888049 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.888094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.888107 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.888124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:34 crc kubenswrapper[4858]: I0127 20:08:34.888135 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:34Z","lastTransitionTime":"2026-01-27T20:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.005675 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.005732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.005741 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.005755 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.005765 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:35Z","lastTransitionTime":"2026-01-27T20:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.062422 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 06:17:45.310235814 +0000 UTC Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.070717 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:35 crc kubenswrapper[4858]: E0127 20:08:35.070849 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.107380 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.107427 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.107439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.107454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.107465 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:35Z","lastTransitionTime":"2026-01-27T20:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.210309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.210371 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.210383 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.210404 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.210418 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:35Z","lastTransitionTime":"2026-01-27T20:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.313102 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.313141 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.313150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.313164 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.313174 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:35Z","lastTransitionTime":"2026-01-27T20:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.415856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.415893 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.415902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.415915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.415929 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:35Z","lastTransitionTime":"2026-01-27T20:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.519133 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.519189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.519199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.519218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.519228 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:35Z","lastTransitionTime":"2026-01-27T20:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.622223 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.622275 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.622311 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.622329 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.622342 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:35Z","lastTransitionTime":"2026-01-27T20:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.719948 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.719987 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.719997 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.720013 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.720023 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:35Z","lastTransitionTime":"2026-01-27T20:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:35 crc kubenswrapper[4858]: E0127 20:08:35.731041 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:35Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.734207 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.734237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.734247 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.734259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.734269 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:35Z","lastTransitionTime":"2026-01-27T20:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:35 crc kubenswrapper[4858]: E0127 20:08:35.749754 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:35Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.753374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.753411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.753421 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.753438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.753448 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:35Z","lastTransitionTime":"2026-01-27T20:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:35 crc kubenswrapper[4858]: E0127 20:08:35.764668 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:35Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.768253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.768293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.768303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.768318 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.768328 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:35Z","lastTransitionTime":"2026-01-27T20:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:35 crc kubenswrapper[4858]: E0127 20:08:35.779759 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:35Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.783506 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.783537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.783545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.783575 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.783585 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:35Z","lastTransitionTime":"2026-01-27T20:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:35 crc kubenswrapper[4858]: E0127 20:08:35.795129 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:35Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:35 crc kubenswrapper[4858]: E0127 20:08:35.795291 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.796839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.796882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.796893 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.796907 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.796917 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:35Z","lastTransitionTime":"2026-01-27T20:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.899374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.899413 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.899421 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.899436 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:35 crc kubenswrapper[4858]: I0127 20:08:35.899446 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:35Z","lastTransitionTime":"2026-01-27T20:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.001333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.001366 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.001374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.001389 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.001399 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:36Z","lastTransitionTime":"2026-01-27T20:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.062531 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 10:47:39.713902065 +0000 UTC
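[Editor's note] This kubelet-serving certificate is not the expired one: it remains valid until 2026-02-24, but its rotation deadline (2025-12-19) already lies in the past at the node's clock time, so the certificate manager will attempt rotation immediately. The deadline it logs is a jittered point most of the way through the certificate's validity window; a rough sketch of that scheme, assuming the roughly 70-90% jitter band used by the Kubernetes client-go certificate manager (the one-year lifetime below is an illustrative guess, not a value from this log):

    # rotation_deadline.py -- illustrative sketch of a jittered rotation
    # deadline; it mirrors the behaviour behind the certificate_manager.go
    # line above, not a copy of its code.
    import random
    from datetime import datetime, timedelta, timezone

    def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
        """Pick a random point 70-90% of the way through the cert lifetime."""
        lifetime = (not_after - not_before).total_seconds()
        jitter = 0.7 + 0.2 * random.random()  # assumed bounds
        return not_before + timedelta(seconds=lifetime * jitter)

    # notAfter comes from the log line above; the issue date is assumed.
    not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)
    not_before = not_after - timedelta(days=365)  # hypothetical lifetime

    deadline = rotation_deadline(not_before, not_after)
    now = datetime(2026, 1, 27, 20, 8, 36, tzinfo=timezone.utc)
    print("rotate at:", deadline, "| overdue:", now > deadline)

With the log's values the deadline precedes the current time, so rotation is already due; whether renewal can succeed while the API path above keeps failing is a separate question.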
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.083736 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.101745 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.103417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.103446 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.103455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.103470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.103481 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:36Z","lastTransitionTime":"2026-01-27T20:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.113640 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.130942 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72a564e22fd577be491306b7693a73608a304875
f327765243fed48355deb112\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:23Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 20:08:23.232803 6404 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 20:08:23.232828 6404 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 20:08:23.232847 6404 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 20:08:23.232881 6404 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 20:08:23.232915 6404 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 20:08:23.233232 6404 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 20:08:23.233318 6404 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 20:08:23.233347 6404 ovnkube.go:599] Stopped ovnkube\\\\nI0127 20:08:23.233372 6404 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 20:08:23.233441 6404 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.150331 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a41
5fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.165298 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6e9abcc-3467-43d4-809d-d4d9c3d19a17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bcbab522a48af8a7103c1e3c0a2bf06df8763675f2f39b24f559d3a40ae32e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7f8cb8cd1313fee38d658392c84878c4f22e406e5b48926b09a362999077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11991cd32eec68e9104c1f58fc2bd7d2f78a38e0f3217d4dd1bbc52038bed63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.177517 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 
2025-08-24T17:21:41Z" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.187997 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.202349 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:36 crc 
kubenswrapper[4858]: I0127 20:08:36.205881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.205911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.205942 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.205955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.205964 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:36Z","lastTransitionTime":"2026-01-27T20:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.212266 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.223245 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.233816 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.244491 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.256151 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.265886 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.278252 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.289359 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.298986 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:36Z is after 2025-08-24T17:21:41Z" Jan 27 
20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.307917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.307958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.307971 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.307986 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.307999 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:36Z","lastTransitionTime":"2026-01-27T20:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.409762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.409802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.409813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.409827 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.409837 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:36Z","lastTransitionTime":"2026-01-27T20:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.511592 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.511646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.511655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.511672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.511683 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:36Z","lastTransitionTime":"2026-01-27T20:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.614616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.614654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.614663 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.614676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.614686 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:36Z","lastTransitionTime":"2026-01-27T20:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.717475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.717523 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.717532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.717562 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.717575 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:36Z","lastTransitionTime":"2026-01-27T20:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.819926 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.819986 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.819995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.820007 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.820017 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:36Z","lastTransitionTime":"2026-01-27T20:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.922604 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.922636 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.922644 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.922656 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:36 crc kubenswrapper[4858]: I0127 20:08:36.922664 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:36Z","lastTransitionTime":"2026-01-27T20:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.025798 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.025838 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.025847 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.025861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.025871 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:37Z","lastTransitionTime":"2026-01-27T20:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.063448 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 05:12:23.437927398 +0000 UTC Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.070757 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:37 crc kubenswrapper[4858]: E0127 20:08:37.070888 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.128419 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.128461 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.128472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.128489 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.128500 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:37Z","lastTransitionTime":"2026-01-27T20:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.231903 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.231975 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.232065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.232097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.232120 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:37Z","lastTransitionTime":"2026-01-27T20:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.334469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.334511 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.334520 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.334535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.334562 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:37Z","lastTransitionTime":"2026-01-27T20:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.437421 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.437523 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.437572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.437594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.437609 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:37Z","lastTransitionTime":"2026-01-27T20:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.540432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.540465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.540474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.540490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.540500 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:37Z","lastTransitionTime":"2026-01-27T20:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.643728 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.643801 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.643818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.643842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.643861 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:37Z","lastTransitionTime":"2026-01-27T20:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.746331 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.746388 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.746401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.746421 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.746433 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:37Z","lastTransitionTime":"2026-01-27T20:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.848840 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.848895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.848911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.848929 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.848941 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:37Z","lastTransitionTime":"2026-01-27T20:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.951474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.951567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.951585 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.951601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:37 crc kubenswrapper[4858]: I0127 20:08:37.951612 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:37Z","lastTransitionTime":"2026-01-27T20:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.054068 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.054751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.054785 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.054811 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.054827 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:38Z","lastTransitionTime":"2026-01-27T20:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.064459 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 05:03:23.800209142 +0000 UTC Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.069908 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.070012 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.070098 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:38 crc kubenswrapper[4858]: E0127 20:08:38.070232 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:38 crc kubenswrapper[4858]: E0127 20:08:38.070360 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:38 crc kubenswrapper[4858]: E0127 20:08:38.070656 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.070943 4858 scope.go:117] "RemoveContainer" containerID="72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.157500 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.157889 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.157900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.157916 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.157925 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:38Z","lastTransitionTime":"2026-01-27T20:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.261294 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.261383 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.261396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.261420 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.261434 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:38Z","lastTransitionTime":"2026-01-27T20:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.364877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.364949 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.364961 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.364981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.364994 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:38Z","lastTransitionTime":"2026-01-27T20:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.467424 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.467475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.467486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.467505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.467518 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:38Z","lastTransitionTime":"2026-01-27T20:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.570462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.570500 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.570510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.570530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.570541 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:38Z","lastTransitionTime":"2026-01-27T20:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.673645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.673686 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.673694 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.673708 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.673718 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:38Z","lastTransitionTime":"2026-01-27T20:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.776034 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.776080 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.776091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.776108 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.776117 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:38Z","lastTransitionTime":"2026-01-27T20:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.878900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.878948 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.878962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.878980 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.878991 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:38Z","lastTransitionTime":"2026-01-27T20:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.981507 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.981590 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.981603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.981619 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:38 crc kubenswrapper[4858]: I0127 20:08:38.981631 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:38Z","lastTransitionTime":"2026-01-27T20:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.065205 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 10:32:53.55644826 +0000 UTC Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.069997 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:39 crc kubenswrapper[4858]: E0127 20:08:39.070130 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.084578 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.084612 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.084622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.084637 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.084647 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:39Z","lastTransitionTime":"2026-01-27T20:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.186630 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.186674 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.186686 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.186702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.186713 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:39Z","lastTransitionTime":"2026-01-27T20:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.289435 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.289480 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.289491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.289505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.289517 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:39Z","lastTransitionTime":"2026-01-27T20:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.391501 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.391577 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.391588 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.391603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.391612 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:39Z","lastTransitionTime":"2026-01-27T20:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.472012 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovnkube-controller/1.log" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.474011 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerStarted","Data":"b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e"} Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.474810 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.491935 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6
df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.493691 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.493820 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.494158 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.494408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.495257 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:39Z","lastTransitionTime":"2026-01-27T20:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.504324 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.517790 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.526131 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.536751 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.549446 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.565915 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.574989 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 2025-08-24T17:21:41Z" Jan 27 
20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.590869 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d0
38773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:23Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 20:08:23.232803 6404 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 20:08:23.232828 6404 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 20:08:23.232847 6404 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 20:08:23.232881 6404 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 20:08:23.232915 6404 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 20:08:23.233232 6404 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 20:08:23.233318 6404 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 20:08:23.233347 6404 ovnkube.go:599] Stopped ovnkube\\\\nI0127 20:08:23.233372 6404 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 20:08:23.233441 6404 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.597348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.597387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.597400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.597416 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.597426 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:39Z","lastTransitionTime":"2026-01-27T20:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.609947 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.621709 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6e9abcc-3467-43d4-809d-d4d9c3d19a17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bcbab522a48af8a7103c1e3c0a2bf06df8763675f2f39b24f559d3a40ae32e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7f8cb8cd1313fee38d658392c84878c4f22e406e5b48926b09a362999077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11991cd32eec68e9104c1f58fc2bd7d2f78a38e0f3217d4dd1bbc52038bed63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.634006 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 
2025-08-24T17:21:41Z" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.646102 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.662993 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:39 crc 
kubenswrapper[4858]: I0127 20:08:39.674456 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.685468 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.695791 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.699314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.699367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.699379 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.699397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.699412 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:39Z","lastTransitionTime":"2026-01-27T20:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.706686 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:39Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.801707 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.801740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.801748 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.801760 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.801773 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:39Z","lastTransitionTime":"2026-01-27T20:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.904373 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.904420 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.904431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.904445 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:39 crc kubenswrapper[4858]: I0127 20:08:39.904454 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:39Z","lastTransitionTime":"2026-01-27T20:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.006828 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.006885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.006897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.006914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.006925 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:40Z","lastTransitionTime":"2026-01-27T20:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.065479 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:04:34.649241651 +0000 UTC Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.070875 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.070931 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.071000 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:40 crc kubenswrapper[4858]: E0127 20:08:40.071010 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:40 crc kubenswrapper[4858]: E0127 20:08:40.071091 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:40 crc kubenswrapper[4858]: E0127 20:08:40.071136 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.109058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.109104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.109115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.109132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.109145 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:40Z","lastTransitionTime":"2026-01-27T20:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.214734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.214769 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.214777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.214792 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.214801 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:40Z","lastTransitionTime":"2026-01-27T20:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.317776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.317831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.317845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.317865 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.317879 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:40Z","lastTransitionTime":"2026-01-27T20:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.420210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.420256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.420268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.420287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.420299 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:40Z","lastTransitionTime":"2026-01-27T20:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.478634 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovnkube-controller/2.log" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.479327 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovnkube-controller/1.log" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.482625 4858 generic.go:334] "Generic (PLEG): container finished" podID="5cda3ac1-7db7-4215-a301-b757743bff59" containerID="b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e" exitCode=1 Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.482685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerDied","Data":"b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e"} Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.482744 4858 scope.go:117] "RemoveContainer" containerID="72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.483428 4858 scope.go:117] "RemoveContainer" containerID="b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e" Jan 27 20:08:40 crc kubenswrapper[4858]: E0127 20:08:40.483590 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.498867 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.511519 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.522431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.522473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.522484 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.522503 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.522516 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:40Z","lastTransitionTime":"2026-01-27T20:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.523944 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.538391 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.554962 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.566920 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.577285 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.586029 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 
20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.597312 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.610369 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.621021 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.624650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.624685 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.624694 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.624707 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.624716 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:40Z","lastTransitionTime":"2026-01-27T20:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.638854 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f
1de77e5215350939f032be3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72a564e22fd577be491306b7693a73608a304875f327765243fed48355deb112\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:23Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 20:08:23.232803 6404 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 20:08:23.232828 6404 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 20:08:23.232847 6404 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 20:08:23.232881 6404 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 20:08:23.232915 6404 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 20:08:23.233232 6404 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 20:08:23.233318 6404 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 20:08:23.233347 6404 ovnkube.go:599] Stopped ovnkube\\\\nI0127 20:08:23.233372 6404 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 20:08:23.233441 6404 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:39Z\\\",\\\"message\\\":\\\" 6612 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.549402 6612 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 20:08:39.549426 6612 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-855m5 in node crc\\\\nI0127 20:08:39.549434 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-855m5 after 0 failed attempt(s)\\\\nI0127 20:08:39.549441 6612 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.548953 6612 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0127 20:08:39.549458 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed 
attempt(s)\\\\nI0127 20:08:39.549463 6612 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0127 20:08:39.549489 6612 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"
192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.658493 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121c
dcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.668803 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6e9abcc-3467-43d4-809d-d4d9c3d19a17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bcbab522a48af8a7103c1e3c0a2bf06df8763675f2f39b24f559d3a40ae32e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7f8cb8cd1313fee38d658392c84878c4f22e406e5b48926b09a362999077c\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11991cd32eec68e9104c1f58fc2bd7d2f78a38e0f3217d4dd1bbc52038bed63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.680570 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.692145 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.707674 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.718790 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:40Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.726139 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.726169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.726192 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.726210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.726221 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:40Z","lastTransitionTime":"2026-01-27T20:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.828888 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.828941 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.828951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.828968 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.828978 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:40Z","lastTransitionTime":"2026-01-27T20:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.931894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.931935 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.931947 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.931964 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:40 crc kubenswrapper[4858]: I0127 20:08:40.931976 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:40Z","lastTransitionTime":"2026-01-27T20:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.034814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.034887 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.034899 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.034915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.034924 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:41Z","lastTransitionTime":"2026-01-27T20:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.066402 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:51:25.900939783 +0000 UTC Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.071089 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:41 crc kubenswrapper[4858]: E0127 20:08:41.071330 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.137368 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.137436 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.137457 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.137490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.137509 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:41Z","lastTransitionTime":"2026-01-27T20:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.241290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.241362 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.241375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.241410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.241434 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:41Z","lastTransitionTime":"2026-01-27T20:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.343747 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.343803 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.343815 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.343839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.343850 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:41Z","lastTransitionTime":"2026-01-27T20:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.446869 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.446915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.446926 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.446944 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.446955 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:41Z","lastTransitionTime":"2026-01-27T20:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.487260 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovnkube-controller/2.log" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.491914 4858 scope.go:117] "RemoveContainer" containerID="b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e" Jan 27 20:08:41 crc kubenswrapper[4858]: E0127 20:08:41.492124 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.504431 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.516482 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.526268 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.541608 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f
1de77e5215350939f032be3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:39Z\\\",\\\"message\\\":\\\" 6612 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.549402 6612 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 20:08:39.549426 6612 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-855m5 in node crc\\\\nI0127 20:08:39.549434 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-855m5 after 0 failed attempt(s)\\\\nI0127 20:08:39.549441 6612 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.548953 6612 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0127 20:08:39.549458 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0127 20:08:39.549463 6612 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0127 20:08:39.549489 6612 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.548878 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.548956 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.548970 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.548988 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.548999 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:41Z","lastTransitionTime":"2026-01-27T20:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.560429 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.575762 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6e9abcc-3467-43d4-809d-d4d9c3d19a17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bcbab522a48af8a7103c1e3c0a2bf06df8763675f2f39b24f559d3a40ae32e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7f8cb8cd1313fee38d658392c84878c4f22e406e5b48926b09a362999077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11991cd32eec68e9104c1f58fc2bd7d2f78a38e0f3217d4dd1bbc52038bed63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.588573 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 
2025-08-24T17:21:41Z" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.600440 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.617145 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:41 crc 
kubenswrapper[4858]: I0127 20:08:41.626636 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.635865 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.645872 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.651442 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.651471 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.651479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.651493 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.651503 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:41Z","lastTransitionTime":"2026-01-27T20:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.659119 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.672806 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.683963 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.694740 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.705576 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.715425 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:41Z is after 2025-08-24T17:21:41Z" Jan 27 
20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.754262 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.754474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.754605 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.754722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.754816 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:41Z","lastTransitionTime":"2026-01-27T20:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.857016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.857068 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.857081 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.857101 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.857114 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:41Z","lastTransitionTime":"2026-01-27T20:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.959963 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.960219 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.960308 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.960431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:41 crc kubenswrapper[4858]: I0127 20:08:41.960586 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:41Z","lastTransitionTime":"2026-01-27T20:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.063804 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.063848 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.063862 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.063877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.063889 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:42Z","lastTransitionTime":"2026-01-27T20:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.067530 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 01:43:41.441365806 +0000 UTC Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.070791 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.070957 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:42 crc kubenswrapper[4858]: E0127 20:08:42.071156 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:42 crc kubenswrapper[4858]: E0127 20:08:42.071251 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.071184 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:42 crc kubenswrapper[4858]: E0127 20:08:42.071587 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.166104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.166149 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.166160 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.166176 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.166189 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:42Z","lastTransitionTime":"2026-01-27T20:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.269567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.269807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.269902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.269967 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.270023 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:42Z","lastTransitionTime":"2026-01-27T20:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.372667 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.372721 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.372735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.372758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.372782 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:42Z","lastTransitionTime":"2026-01-27T20:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.476183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.476269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.476281 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.476305 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.476320 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:42Z","lastTransitionTime":"2026-01-27T20:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.579070 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.579126 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.579139 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.579155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.579167 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:42Z","lastTransitionTime":"2026-01-27T20:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.682004 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.682045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.682054 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.682069 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.682079 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:42Z","lastTransitionTime":"2026-01-27T20:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.783842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.783890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.783901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.783918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.783930 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:42Z","lastTransitionTime":"2026-01-27T20:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.886463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.886501 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.886511 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.886526 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.886536 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:42Z","lastTransitionTime":"2026-01-27T20:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.988364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.988400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.988411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.988424 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:42 crc kubenswrapper[4858]: I0127 20:08:42.988432 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:42Z","lastTransitionTime":"2026-01-27T20:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.067888 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 10:15:19.634684633 +0000 UTC Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.070199 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:43 crc kubenswrapper[4858]: E0127 20:08:43.070337 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.090648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.090702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.090714 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.090731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.090745 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:43Z","lastTransitionTime":"2026-01-27T20:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.193176 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.193435 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.193522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.193629 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.193705 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:43Z","lastTransitionTime":"2026-01-27T20:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.296417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.296467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.296479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.296496 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.296507 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:43Z","lastTransitionTime":"2026-01-27T20:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.399316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.399367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.399379 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.399397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.399410 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:43Z","lastTransitionTime":"2026-01-27T20:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.502112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.502159 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.502173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.502191 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.502208 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:43Z","lastTransitionTime":"2026-01-27T20:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.604315 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.604360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.604371 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.604389 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.604401 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:43Z","lastTransitionTime":"2026-01-27T20:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.707544 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.707596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.707607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.707623 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.707635 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:43Z","lastTransitionTime":"2026-01-27T20:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.814648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.814700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.814713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.814732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.814746 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:43Z","lastTransitionTime":"2026-01-27T20:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.917205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.917256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.917269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.917287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:43 crc kubenswrapper[4858]: I0127 20:08:43.917299 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:43Z","lastTransitionTime":"2026-01-27T20:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.019729 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.019781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.019797 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.019817 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.019833 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:44Z","lastTransitionTime":"2026-01-27T20:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.068020 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:12:54.589417213 +0000 UTC Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.070373 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.070393 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.070430 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:44 crc kubenswrapper[4858]: E0127 20:08:44.070510 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:44 crc kubenswrapper[4858]: E0127 20:08:44.070671 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:44 crc kubenswrapper[4858]: E0127 20:08:44.070868 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.122268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.122306 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.122318 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.122336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.122347 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:44Z","lastTransitionTime":"2026-01-27T20:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.224152 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.224184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.224192 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.224205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.224215 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:44Z","lastTransitionTime":"2026-01-27T20:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.327529 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.327607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.327625 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.327643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.327657 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:44Z","lastTransitionTime":"2026-01-27T20:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.430051 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.430095 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.430106 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.430123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.430136 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:44Z","lastTransitionTime":"2026-01-27T20:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.532312 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.532356 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.532370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.532387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.532400 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:44Z","lastTransitionTime":"2026-01-27T20:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.634475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.634513 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.634522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.634535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.634563 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:44Z","lastTransitionTime":"2026-01-27T20:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.737226 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.737266 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.737277 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.737292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.737304 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:44Z","lastTransitionTime":"2026-01-27T20:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.839458 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.839492 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.839501 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.839514 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.839522 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:44Z","lastTransitionTime":"2026-01-27T20:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.942002 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.942041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.942049 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.942063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:44 crc kubenswrapper[4858]: I0127 20:08:44.942072 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:44Z","lastTransitionTime":"2026-01-27T20:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.044164 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.044232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.044245 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.044260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.044271 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:45Z","lastTransitionTime":"2026-01-27T20:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.068784 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:06:42.051566214 +0000 UTC Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.070056 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:45 crc kubenswrapper[4858]: E0127 20:08:45.070196 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.146981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.147046 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.147057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.147075 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.147086 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:45Z","lastTransitionTime":"2026-01-27T20:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.249209 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.249255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.249270 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.249285 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.249294 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:45Z","lastTransitionTime":"2026-01-27T20:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
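[Editor's note] Note the certificate_manager entries in this stretch: the kubelet-serving certificate does not expire until 2026-02-24, but the computed rotation deadlines (2026-01-17 earlier, 2025-11-15 here) already lie in the past at log time, so the kubelet should be attempting rotation immediately. A quick sanity check with the timestamps copied from these lines (illustrative only):

```python
from datetime import datetime, timezone

# Timestamps copied from the certificate_manager entries above.
log_time = datetime(2026, 1, 27, 20, 8, 45, tzinfo=timezone.utc)
rotation_deadline = datetime(2025, 11, 15, 17, 6, 42, tzinfo=timezone.utc)
expiration = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)

# True: past the rotation deadline but not yet expired, i.e. rotation is overdue.
print(rotation_deadline < log_time < expiration)
```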
Has your network provider started?"} Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.351639 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.351676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.351689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.351704 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.351714 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:45Z","lastTransitionTime":"2026-01-27T20:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.454571 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.454606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.454614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.454626 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.454635 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:45Z","lastTransitionTime":"2026-01-27T20:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.557359 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.557399 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.557408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.557425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.557436 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:45Z","lastTransitionTime":"2026-01-27T20:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.659807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.659843 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.659855 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.659870 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.659882 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:45Z","lastTransitionTime":"2026-01-27T20:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.762565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.762605 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.762615 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.762628 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.762638 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:45Z","lastTransitionTime":"2026-01-27T20:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.864725 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.864764 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.864772 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.864786 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.864796 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:45Z","lastTransitionTime":"2026-01-27T20:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.967792 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.967840 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.967850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.967866 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.967877 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:45Z","lastTransitionTime":"2026-01-27T20:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.980954 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.981004 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.981013 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.981029 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:45 crc kubenswrapper[4858]: I0127 20:08:45.981038 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:45Z","lastTransitionTime":"2026-01-27T20:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
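[Editor's note] The node-status updates themselves start failing in the entries just below: each patch is rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24, months before the log time of 2026-01-27. To inspect that certificate directly, one can connect with verification disabled and dump it for openssl (a diagnostic sketch to run on the node; the host and port come from the error text, the output file name is an assumption):

```python
import socket
import ssl

# Fetch the webhook's serving certificate with verification disabled,
# so the expired chain itself can be examined, e.g.:
#   openssl x509 -noout -enddate < webhook.pem
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # intentionally accept the expired cert

with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname="127.0.0.1") as tls:
        der = tls.getpeercert(binary_form=True)

with open("webhook.pem", "w") as f:
    f.write(ssl.DER_cert_to_PEM_cert(der))
print("wrote webhook.pem; compare notAfter with 2025-08-24T17:21:41Z from the log")
```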
Has your network provider started?"} Jan 27 20:08:45 crc kubenswrapper[4858]: E0127 20:08:45.993345 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:45Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.000586 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.000631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.000641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.000659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.000674 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:46Z","lastTransitionTime":"2026-01-27T20:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:46 crc kubenswrapper[4858]: E0127 20:08:46.013860 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.017544 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.017624 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.017641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.017664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.017680 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:46Z","lastTransitionTime":"2026-01-27T20:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:46 crc kubenswrapper[4858]: E0127 20:08:46.030255 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.033999 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.034042 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.034052 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.034067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.034077 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:46Z","lastTransitionTime":"2026-01-27T20:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:46 crc kubenswrapper[4858]: E0127 20:08:46.059469 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.069161 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24
05:53:03 +0000 UTC, rotation deadline is 2026-01-02 06:10:24.954965402 +0000 UTC Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.069489 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.069711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.069850 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.069889 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.069942 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:46 crc kubenswrapper[4858]: E0127 20:08:46.070041 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.070065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.070103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.070117 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:46Z","lastTransitionTime":"2026-01-27T20:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:46 crc kubenswrapper[4858]: E0127 20:08:46.070365 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:46 crc kubenswrapper[4858]: E0127 20:08:46.070489 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.080727 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.091761 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.101861 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.110941 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.132102 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:39Z\\\",\\\"message\\\":\\\" 6612 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.549402 6612 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 20:08:39.549426 6612 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-855m5 in node crc\\\\nI0127 20:08:39.549434 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-855m5 after 0 failed attempt(s)\\\\nI0127 20:08:39.549441 6612 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.548953 6612 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0127 20:08:39.549458 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0127 20:08:39.549463 6612 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0127 20:08:39.549489 6612 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.149868 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a41
5fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.163048 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6e9abcc-3467-43d4-809d-d4d9c3d19a17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bcbab522a48af8a7103c1e3c0a2bf06df8763675f2f39b24f559d3a40ae32e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7f8cb8cd1313fee38d658392c84878c4f22e406e5b48926b09a362999077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11991cd32eec68e9104c1f58fc2bd7d2f78a38e0f3217d4dd1bbc52038bed63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.171373 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.171407 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.171418 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.171433 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.171445 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:46Z","lastTransitionTime":"2026-01-27T20:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.175656 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.186216 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.199392 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.210169 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.220617 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.231715 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.244150 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.254763 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.264571 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.271836 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.273673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.273769 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.273795 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.273821 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.273840 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:46Z","lastTransitionTime":"2026-01-27T20:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.283514 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:46Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.376460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.376507 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.376518 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.376535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.376565 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:46Z","lastTransitionTime":"2026-01-27T20:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
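Has your network provider started?"}

Every status patch above fails the same way: the API server cannot call the pod.network-node-identity.openshift.io webhook on 127.0.0.1:9743 because its serving certificate expired on 2025-08-24, months before the node's current clock time of 2026-01-27. A minimal spot check of that endpoint, assuming Python is available on the node (host and port are taken from the error text; nothing in this snippet is part of the log itself):

    # Dump the PEM certificate served on the webhook port so its NotAfter date
    # can be inspected, e.g. with `openssl x509 -noout -enddate`.
    import socket
    import ssl

    HOST, PORT = "127.0.0.1", 9743  # endpoint from the webhook errors above

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # inspect the cert without trusting it

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    print(ssl.DER_cert_to_PEM_cert(der))

On CRC this pattern typically appears when a long-stopped instance is resumed after its bundled certificates have lapsed; the patch failures clear once the webhook certificate is rotated.
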
Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.479344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.479386 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.479399 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.479414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.479425 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:46Z","lastTransitionTime":"2026-01-27T20:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.581723 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.581771 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.581780 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.581796 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.581808 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:46Z","lastTransitionTime":"2026-01-27T20:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.684395 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.684449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.684459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.684475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.684485 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:46Z","lastTransitionTime":"2026-01-27T20:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.787491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.787561 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.787573 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.787589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.787600 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:46Z","lastTransitionTime":"2026-01-27T20:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.889678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.889718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.889726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.889740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.889749 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:46Z","lastTransitionTime":"2026-01-27T20:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.992994 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.993040 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.993051 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.993067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:46 crc kubenswrapper[4858]: I0127 20:08:46.993080 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:46Z","lastTransitionTime":"2026-01-27T20:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
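Has your network provider started?"}

The node stays NotReady for one concrete reason repeated in every setters.go entry: no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/, so the container runtime reports NetworkPluginNotReady until the network plugin writes one. A quick way to see what the runtime would find there, assuming Python on the node (the path comes from the kubelet message itself):

    # List the CNI conf dir named in the NetworkPluginNotReady message.
    import os

    CNI_DIR = "/etc/kubernetes/cni/net.d"  # path from the kubelet message

    try:
        entries = sorted(os.listdir(CNI_DIR))
    except FileNotFoundError:
        entries = []

    if entries:
        for name in entries:
            print(name)  # the runtime selects a *.conf/*.conflist from here
    else:
        print(f"{CNI_DIR} is missing or empty; the node stays NotReady until "
              "the network plugin writes its config")
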
Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.070516 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:48:37.50693073 +0000 UTC Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.070652 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:47 crc kubenswrapper[4858]: E0127 20:08:47.070824 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.094795 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.094843 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.094856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.094871 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.094880 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:47Z","lastTransitionTime":"2026-01-27T20:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.197410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.197455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.197474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.197493 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.197507 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:47Z","lastTransitionTime":"2026-01-27T20:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.300277 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.300328 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.300340 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.300358 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.300370 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:47Z","lastTransitionTime":"2026-01-27T20:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.403146 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.403185 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.403198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.403217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.403229 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:47Z","lastTransitionTime":"2026-01-27T20:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.505083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.505115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.505123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.505137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.505145 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:47Z","lastTransitionTime":"2026-01-27T20:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.607136 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.607342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.607388 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.607406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.607419 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:47Z","lastTransitionTime":"2026-01-27T20:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.709661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.709736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.709754 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.709778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.709795 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:47Z","lastTransitionTime":"2026-01-27T20:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.812256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.812309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.812322 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.812339 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.812352 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:47Z","lastTransitionTime":"2026-01-27T20:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.915007 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.915050 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.915058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.915072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:47 crc kubenswrapper[4858]: I0127 20:08:47.915082 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:47Z","lastTransitionTime":"2026-01-27T20:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.016993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.017044 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.017060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.017083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.017099 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:48Z","lastTransitionTime":"2026-01-27T20:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.070785 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:45:32.995512201 +0000 UTC Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.070934 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.070990 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:48 crc kubenswrapper[4858]: E0127 20:08:48.071059 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.070997 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:48 crc kubenswrapper[4858]: E0127 20:08:48.071144 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:48 crc kubenswrapper[4858]: E0127 20:08:48.071323 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.119476 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.119798 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.119905 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.120175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.120266 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:48Z","lastTransitionTime":"2026-01-27T20:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.222674 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.222908 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.222976 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.223044 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.223103 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:48Z","lastTransitionTime":"2026-01-27T20:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.325165 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.325199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.325208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.325221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.325231 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:48Z","lastTransitionTime":"2026-01-27T20:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.427756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.427836 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.427848 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.427867 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.427880 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:48Z","lastTransitionTime":"2026-01-27T20:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.530614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.530653 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.530663 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.530678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.530689 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:48Z","lastTransitionTime":"2026-01-27T20:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.636186 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.636457 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.636544 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.636665 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.636763 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:48Z","lastTransitionTime":"2026-01-27T20:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.739066 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.739110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.739120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.739136 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.739147 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:48Z","lastTransitionTime":"2026-01-27T20:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.842579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.842620 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.842634 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.842650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.842661 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:48Z","lastTransitionTime":"2026-01-27T20:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.945665 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.945733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.945752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.945775 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:48 crc kubenswrapper[4858]: I0127 20:08:48.945789 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:48Z","lastTransitionTime":"2026-01-27T20:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.048400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.048456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.048479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.048501 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.048516 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:49Z","lastTransitionTime":"2026-01-27T20:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
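Has your network provider started?"}

The certificate_manager.go lines interleaved above show that the kubelet-serving certificate itself is still valid until 2026-02-24, but each logged rotation deadline (2025-11-29, 2026-01-13, 2026-01-07, ...) already lies in the past, so the kubelet re-attempts rotation on every sync and draws a fresh jittered deadline each time. A small worked check of that comparison, using timestamps copied from the log (the jittered-deadline behaviour is the kubelet certificate manager's rotation strategy, inferred from the varying deadlines, not stated in these lines):

    # Compare "now" from the log against one logged rotation deadline.
    from datetime import datetime, timezone

    expiry   = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)
    deadline = datetime(2026, 1, 7, 5, 3, 26, tzinfo=timezone.utc)   # 20:08:49 entry
    now      = datetime(2026, 1, 27, 20, 8, 49, tzinfo=timezone.utc) # log time

    print("rotation overdue:", now >= deadline)  # True -> rotate immediately
    print("validity remaining:", expiry - now)   # ~27 days left on the cert
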
Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.070352 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:49 crc kubenswrapper[4858]: E0127 20:08:49.070628 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.071325 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 05:03:26.619843218 +0000 UTC Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.151221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.151263 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.151272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.151287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.151296 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:49Z","lastTransitionTime":"2026-01-27T20:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.253871 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.253920 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.253934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.253953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.253969 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:49Z","lastTransitionTime":"2026-01-27T20:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.356213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.356252 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.356264 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.356303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.356315 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:49Z","lastTransitionTime":"2026-01-27T20:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.458822 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.458862 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.458874 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.458890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.458900 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:49Z","lastTransitionTime":"2026-01-27T20:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.561729 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.562016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.562124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.562230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.562331 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:49Z","lastTransitionTime":"2026-01-27T20:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.664981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.665221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.665310 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.665402 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.665492 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:49Z","lastTransitionTime":"2026-01-27T20:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.768076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.768399 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.768539 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.768681 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.768772 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:49Z","lastTransitionTime":"2026-01-27T20:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.872446 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.872487 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.872498 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.872516 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.872528 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:49Z","lastTransitionTime":"2026-01-27T20:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.974599 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.974648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.974663 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.974681 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:49 crc kubenswrapper[4858]: I0127 20:08:49.974694 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:49Z","lastTransitionTime":"2026-01-27T20:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.070131 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.070131 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:50 crc kubenswrapper[4858]: E0127 20:08:50.070574 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.070154 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:50 crc kubenswrapper[4858]: E0127 20:08:50.070677 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:50 crc kubenswrapper[4858]: E0127 20:08:50.070764 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.072283 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:10:28.765517058 +0000 UTC Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.076340 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.076402 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.076411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.076441 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.076454 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:50Z","lastTransitionTime":"2026-01-27T20:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.178363 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.178407 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.178419 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.178436 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.178449 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:50Z","lastTransitionTime":"2026-01-27T20:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.184740 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs\") pod \"network-metrics-daemon-j5hlm\" (UID: \"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\") " pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:50 crc kubenswrapper[4858]: E0127 20:08:50.184871 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 20:08:50 crc kubenswrapper[4858]: E0127 20:08:50.184967 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs podName:3fa7e9cb-b195-401a-b57c-bdb47f36ffb8 nodeName:}" failed. No retries permitted until 2026-01-27 20:09:22.184940632 +0000 UTC m=+106.892756388 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs") pod "network-metrics-daemon-j5hlm" (UID: "3fa7e9cb-b195-401a-b57c-bdb47f36ffb8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.281116 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.281150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.281159 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.281171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.281181 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:50Z","lastTransitionTime":"2026-01-27T20:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.383213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.383259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.383269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.383287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.383303 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:50Z","lastTransitionTime":"2026-01-27T20:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.485782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.485829 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.485841 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.485859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.485871 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:50Z","lastTransitionTime":"2026-01-27T20:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.588173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.588228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.588244 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.588267 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.588284 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:50Z","lastTransitionTime":"2026-01-27T20:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.690388 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.690431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.690442 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.690461 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.690474 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:50Z","lastTransitionTime":"2026-01-27T20:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.792920 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.792968 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.792981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.792998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.793011 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:50Z","lastTransitionTime":"2026-01-27T20:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.895783 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.895828 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.895838 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.895854 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.895868 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:50Z","lastTransitionTime":"2026-01-27T20:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.997897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.997937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.997945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.997959 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:50 crc kubenswrapper[4858]: I0127 20:08:50.997969 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:50Z","lastTransitionTime":"2026-01-27T20:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.070318 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:51 crc kubenswrapper[4858]: E0127 20:08:51.070463 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.073386 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:08:34.097807309 +0000 UTC Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.100448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.100505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.100516 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.100533 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.100561 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:51Z","lastTransitionTime":"2026-01-27T20:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.202974 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.203021 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.203030 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.203044 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.203054 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:51Z","lastTransitionTime":"2026-01-27T20:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.305404 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.305686 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.305781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.305874 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.306068 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:51Z","lastTransitionTime":"2026-01-27T20:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.407952 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.408002 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.408014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.408031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.408044 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:51Z","lastTransitionTime":"2026-01-27T20:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.510797 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.510828 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.510837 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.510850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.510860 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:51Z","lastTransitionTime":"2026-01-27T20:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.614117 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.614178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.614188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.614203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.614212 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:51Z","lastTransitionTime":"2026-01-27T20:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.716823 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.716862 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.716871 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.716886 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.716896 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:51Z","lastTransitionTime":"2026-01-27T20:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.819304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.819349 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.819360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.819377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.819512 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:51Z","lastTransitionTime":"2026-01-27T20:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.921225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.921263 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.921272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.921286 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:51 crc kubenswrapper[4858]: I0127 20:08:51.921295 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:51Z","lastTransitionTime":"2026-01-27T20:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.023952 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.024001 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.024012 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.024029 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.024040 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:52Z","lastTransitionTime":"2026-01-27T20:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.070640 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.070672 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.070644 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:52 crc kubenswrapper[4858]: E0127 20:08:52.070768 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:52 crc kubenswrapper[4858]: E0127 20:08:52.070827 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:52 crc kubenswrapper[4858]: E0127 20:08:52.070922 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.073567 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 07:10:20.746264023 +0000 UTC Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.126614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.126659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.126671 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.126687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.126698 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:52Z","lastTransitionTime":"2026-01-27T20:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.229394 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.229447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.229456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.229491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.229501 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:52Z","lastTransitionTime":"2026-01-27T20:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.332017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.332074 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.332089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.332108 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.332122 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:52Z","lastTransitionTime":"2026-01-27T20:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.434682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.434731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.434742 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.434756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.434765 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:52Z","lastTransitionTime":"2026-01-27T20:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.536983 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.537025 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.537041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.537059 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.537068 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:52Z","lastTransitionTime":"2026-01-27T20:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.639845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.639877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.639895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.639915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.639926 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:52Z","lastTransitionTime":"2026-01-27T20:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.743804 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.743865 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.743884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.743904 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.743917 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:52Z","lastTransitionTime":"2026-01-27T20:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.845735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.845810 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.845842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.845868 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.845889 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:52Z","lastTransitionTime":"2026-01-27T20:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.948834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.948882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.948904 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.948927 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:52 crc kubenswrapper[4858]: I0127 20:08:52.948945 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:52Z","lastTransitionTime":"2026-01-27T20:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.052103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.052408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.052569 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.052654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.052743 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:53Z","lastTransitionTime":"2026-01-27T20:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.070877 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:53 crc kubenswrapper[4858]: E0127 20:08:53.071155 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.073989 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:14:44.174615502 +0000 UTC Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.155698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.155779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.155804 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.155831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.155855 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:53Z","lastTransitionTime":"2026-01-27T20:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.258395 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.258437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.258463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.258479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.258489 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:53Z","lastTransitionTime":"2026-01-27T20:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.360959 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.361005 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.361016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.361031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.361043 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:53Z","lastTransitionTime":"2026-01-27T20:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.463687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.463733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.463746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.463763 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.463776 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:53Z","lastTransitionTime":"2026-01-27T20:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.566664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.566701 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.566712 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.566730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.566741 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:53Z","lastTransitionTime":"2026-01-27T20:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.669169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.669199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.669209 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.669221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.669230 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:53Z","lastTransitionTime":"2026-01-27T20:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.771710 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.771777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.771799 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.771826 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.771848 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:53Z","lastTransitionTime":"2026-01-27T20:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.873486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.873515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.873523 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.873535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.873543 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:53Z","lastTransitionTime":"2026-01-27T20:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.976280 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.976314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.976325 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.976340 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:53 crc kubenswrapper[4858]: I0127 20:08:53.976351 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:53Z","lastTransitionTime":"2026-01-27T20:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.070746 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.070778 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:54 crc kubenswrapper[4858]: E0127 20:08:54.070895 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.070960 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:54 crc kubenswrapper[4858]: E0127 20:08:54.071012 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:54 crc kubenswrapper[4858]: E0127 20:08:54.071094 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.074812 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 05:48:21.313148952 +0000 UTC Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.078416 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.078454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.078463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.078474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.078484 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:54Z","lastTransitionTime":"2026-01-27T20:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.181459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.181506 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.181519 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.181536 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.181570 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:54Z","lastTransitionTime":"2026-01-27T20:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.284898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.284965 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.284990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.285019 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.285042 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:54Z","lastTransitionTime":"2026-01-27T20:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.387783 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.387839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.387852 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.387869 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.387882 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:54Z","lastTransitionTime":"2026-01-27T20:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.490308 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.490354 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.490365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.490382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.490393 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:54Z","lastTransitionTime":"2026-01-27T20:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.490393 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:54Z","lastTransitionTime":"2026-01-27T20:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.592847 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.592885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.592896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.592912 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.592921 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:54Z","lastTransitionTime":"2026-01-27T20:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.695303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.695349 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.695360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.695377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.695389 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:54Z","lastTransitionTime":"2026-01-27T20:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.797414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.797458 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.797492 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.797510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.797521 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:54Z","lastTransitionTime":"2026-01-27T20:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.900649 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.900709 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.900719 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.900753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:54 crc kubenswrapper[4858]: I0127 20:08:54.900765 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:54Z","lastTransitionTime":"2026-01-27T20:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.003104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.003144 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.003153 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.003167 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.003181 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:55Z","lastTransitionTime":"2026-01-27T20:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.070753 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.072237 4858 scope.go:117] "RemoveContainer" containerID="b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e" Jan 27 20:08:55 crc kubenswrapper[4858]: E0127 20:08:55.072748 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.075807 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:57:40.908238821 +0000 UTC Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.105252 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.105307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.105316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.105335 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.105347 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:55Z","lastTransitionTime":"2026-01-27T20:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.208469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.208593 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.208619 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.208650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.208677 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:55Z","lastTransitionTime":"2026-01-27T20:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.208677 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:55Z","lastTransitionTime":"2026-01-27T20:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.311295 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.311333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.311344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.311359 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.311371 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:55Z","lastTransitionTime":"2026-01-27T20:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.414491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.414632 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.414666 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.414700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.414723 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:55Z","lastTransitionTime":"2026-01-27T20:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.517563 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.517608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.517617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.517632 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.517641 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:55Z","lastTransitionTime":"2026-01-27T20:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.621142 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.621189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.621199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.621217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.621232 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:55Z","lastTransitionTime":"2026-01-27T20:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.723272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.723318 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.723327 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.723343 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.723352 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:55Z","lastTransitionTime":"2026-01-27T20:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.826855 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.826901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.826913 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.826931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.826947 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:55Z","lastTransitionTime":"2026-01-27T20:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.930109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.930167 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.930178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.930199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:55 crc kubenswrapper[4858]: I0127 20:08:55.930211 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:55Z","lastTransitionTime":"2026-01-27T20:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.033271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.033344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.033367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.033412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.033443 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:56Z","lastTransitionTime":"2026-01-27T20:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.070575 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.070670 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 20:08:56 crc kubenswrapper[4858]: E0127 20:08:56.070874 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:56 crc kubenswrapper[4858]: E0127 20:08:56.071023 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:56 crc kubenswrapper[4858]: E0127 20:08:56.071138 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.075928 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:42:38.78428492 +0000 UTC Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.086517 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.099903 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z"
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.116743 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.130583 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.135661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.135697 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.135706 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 
20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.135718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.135727 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:56Z","lastTransitionTime":"2026-01-27T20:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.142880 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]
}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.154321 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.163940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.163979 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.163990 4858 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.164006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.164019 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:56Z","lastTransitionTime":"2026-01-27T20:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.168768 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: E0127 20:08:56.178999 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.182638 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.182693 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.182706 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.182723 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.182735 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:56Z","lastTransitionTime":"2026-01-27T20:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.182702 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: E0127 20:08:56.193985 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.196126 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.197659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.198323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.198362 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.198382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.198394 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:56Z","lastTransitionTime":"2026-01-27T20:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.207630 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: E0127 20:08:56.209066 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.212018 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.212059 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.212071 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.212088 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.212100 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:56Z","lastTransitionTime":"2026-01-27T20:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.217027 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: E0127 20:08:56.223141 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.226101 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.226149 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.226162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.226180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.226195 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:56Z","lastTransitionTime":"2026-01-27T20:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.227574 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6e9abcc-3467-43d4-809d-d4d9c3d19a17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bcbab522a48af8a7103c1e3c0a2bf06df8763675f2f39b24f559d3a40ae32e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7f8cb8cd1313fee38d658392c84878c4f22e406e5b48926b09a362999077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11991cd32eec68e9104c1f58fc2bd7d2f78a38e0f3217d4dd1bbc52038bed63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: E0127 20:08:56.236688 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e
10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: E0127 20:08:56.236808 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.238280 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.238326 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.238340 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.238357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.238369 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:56Z","lastTransitionTime":"2026-01-27T20:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.239469 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.249935 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.262943 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.272717 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.288532 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:39Z\\\",\\\"message\\\":\\\" 6612 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.549402 6612 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 20:08:39.549426 6612 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-855m5 in node crc\\\\nI0127 20:08:39.549434 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-855m5 after 0 failed attempt(s)\\\\nI0127 20:08:39.549441 6612 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.548953 6612 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0127 20:08:39.549458 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0127 20:08:39.549463 6612 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0127 20:08:39.549489 6612 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.305803 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a41
5fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.340818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.340857 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:56 crc 
kubenswrapper[4858]: I0127 20:08:56.340867 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.340881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.340892 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:56Z","lastTransitionTime":"2026-01-27T20:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.444112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.444146 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.444154 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.444167 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.444175 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:56Z","lastTransitionTime":"2026-01-27T20:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.532480 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-855m5_0fea6600-49c2-4130-a506-6046f0f7760d/kube-multus/0.log" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.532761 4858 generic.go:334] "Generic (PLEG): container finished" podID="0fea6600-49c2-4130-a506-6046f0f7760d" containerID="e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea" exitCode=1 Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.532867 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-855m5" event={"ID":"0fea6600-49c2-4130-a506-6046f0f7760d","Type":"ContainerDied","Data":"e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea"} Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.533264 4858 scope.go:117] "RemoveContainer" containerID="e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.546514 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.546562 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.546573 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.546589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.546598 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:56Z","lastTransitionTime":"2026-01-27T20:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.551520 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.562887 4858 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.582816 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:39Z\\\",\\\"message\\\":\\\" 6612 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.549402 6612 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 20:08:39.549426 6612 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-855m5 in node crc\\\\nI0127 20:08:39.549434 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-855m5 after 0 failed attempt(s)\\\\nI0127 20:08:39.549441 6612 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.548953 6612 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0127 20:08:39.549458 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0127 20:08:39.549463 6612 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0127 20:08:39.549489 6612 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.601402 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a41
5fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.612995 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6e9abcc-3467-43d4-809d-d4d9c3d19a17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bcbab522a48af8a7103c1e3c0a2bf06df8763675f2f39b24f559d3a40ae32e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7f8cb8cd1313fee38d658392c84878c4f22e406e5b48926b09a362999077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11991cd32eec68e9104c1f58fc2bd7d2f78a38e0f3217d4dd1bbc52038bed63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.628250 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 
2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.640105 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.648993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.649041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.649055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.649076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.649088 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:56Z","lastTransitionTime":"2026-01-27T20:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.653111 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.663961 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.679173 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.690635 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.702573 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.713084 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.724343 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.733050 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.743302 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.750982 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.751174 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.751234 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.751355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.751437 
4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:56Z","lastTransitionTime":"2026-01-27T20:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.755131 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:55Z\\\",\\\"message\\\":\\\"2026-01-27T20:08:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_85a52741-4d77-4c3a-b812-eaba48f0d56f\\\\n2026-01-27T20:08:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_85a52741-4d77-4c3a-b812-eaba48f0d56f to /host/opt/cni/bin/\\\\n2026-01-27T20:08:10Z [verbose] multus-daemon started\\\\n2026-01-27T20:08:10Z [verbose] Readiness Indicator file check\\\\n2026-01-27T20:08:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.765472 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:56Z is after 2025-08-24T17:21:41Z" Jan 27 
20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.854165 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.854222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.854239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.854261 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.854277 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:56Z","lastTransitionTime":"2026-01-27T20:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.957002 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.957253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.957331 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.957405 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:56 crc kubenswrapper[4858]: I0127 20:08:56.957468 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:56Z","lastTransitionTime":"2026-01-27T20:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.059435 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.059468 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.059478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.059491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.059501 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:57Z","lastTransitionTime":"2026-01-27T20:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.070689 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:57 crc kubenswrapper[4858]: E0127 20:08:57.070799 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.076874 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:12:58.916401208 +0000 UTC Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.161823 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.162474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.162517 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.162537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.162585 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:57Z","lastTransitionTime":"2026-01-27T20:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.265367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.265414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.265427 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.265445 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.265456 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:57Z","lastTransitionTime":"2026-01-27T20:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.366982 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.367026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.367039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.367056 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.367069 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:57Z","lastTransitionTime":"2026-01-27T20:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.469391 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.469476 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.469487 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.469504 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.469517 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:57Z","lastTransitionTime":"2026-01-27T20:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.538365 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-855m5_0fea6600-49c2-4130-a506-6046f0f7760d/kube-multus/0.log" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.538433 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-855m5" event={"ID":"0fea6600-49c2-4130-a506-6046f0f7760d","Type":"ContainerStarted","Data":"57801dd9a207d6a59bdd79e9a8c06e2d2bce4e40905aa52aaf172b2c9430703f"} Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.550751 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.562490 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.572339 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.572379 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.572390 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.572406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.572417 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:57Z","lastTransitionTime":"2026-01-27T20:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.575826 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.589916 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.599389 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.610350 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.622987 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57801dd9a207d6a59bdd79e9a8c06e2d2bce4e40905aa52aaf172b2c9430703f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:55Z\\\",\\\"message\\\":\\\"2026-01-27T20:08:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_85a52741-4d77-4c3a-b812-eaba48f0d56f\\\\n2026-01-27T20:08:09+00:00 [cnibincopy] 
Successfully moved files in /host/opt/cni/bin/upgrade_85a52741-4d77-4c3a-b812-eaba48f0d56f to /host/opt/cni/bin/\\\\n2026-01-27T20:08:10Z [verbose] multus-daemon started\\\\n2026-01-27T20:08:10Z [verbose] Readiness Indicator file check\\\\n2026-01-27T20:08:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.633348 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 
20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.646915 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.656636 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.674883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.674935 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.674948 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.674963 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.674975 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:57Z","lastTransitionTime":"2026-01-27T20:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.675100 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:39Z\\\",\\\"message\\\":\\\" 6612 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.549402 6612 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 20:08:39.549426 6612 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-855m5 in node crc\\\\nI0127 20:08:39.549434 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-855m5 after 0 failed attempt(s)\\\\nI0127 20:08:39.549441 6612 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.548953 6612 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0127 20:08:39.549458 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0127 20:08:39.549463 6612 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0127 20:08:39.549489 6612 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.693620 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121c
dcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.704386 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6e9abcc-3467-43d4-809d-d4d9c3d19a17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bcbab522a48af8a7103c1e3c0a2bf06df8763675f2f39b24f559d3a40ae32e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7f8cb8cd1313fee38d658392c84878c4f22e406e5b48926b09a362999077c\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11991cd32eec68e9104c1f58fc2bd7d2f78a38e0f3217d4dd1bbc52038bed63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.716107 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.728953 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.743484 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.757041 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.769375 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:08:57Z is after 2025-08-24T17:21:41Z" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.776853 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.776899 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.776912 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.776928 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.776976 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:57Z","lastTransitionTime":"2026-01-27T20:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.879447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.879510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.879523 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.879543 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.879595 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:57Z","lastTransitionTime":"2026-01-27T20:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.982018 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.982128 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.982143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.982158 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:57 crc kubenswrapper[4858]: I0127 20:08:57.982169 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:57Z","lastTransitionTime":"2026-01-27T20:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.070408 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.070490 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:08:58 crc kubenswrapper[4858]: E0127 20:08:58.070523 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.070587 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:08:58 crc kubenswrapper[4858]: E0127 20:08:58.070627 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:08:58 crc kubenswrapper[4858]: E0127 20:08:58.070675 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.076998 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:00:20.227860741 +0000 UTC Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.084624 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.084657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.084669 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.084683 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.084694 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:58Z","lastTransitionTime":"2026-01-27T20:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.187879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.187914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.187925 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.187938 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.187949 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:58Z","lastTransitionTime":"2026-01-27T20:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.291260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.291318 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.291330 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.291346 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.291358 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:58Z","lastTransitionTime":"2026-01-27T20:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.394786 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.394822 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.394833 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.394849 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.394862 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:58Z","lastTransitionTime":"2026-01-27T20:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.497468 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.497524 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.497537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.497580 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.497593 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:58Z","lastTransitionTime":"2026-01-27T20:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.600571 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.600618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.600631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.600647 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.600659 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:58Z","lastTransitionTime":"2026-01-27T20:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.702831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.702885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.702895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.702909 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.702919 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:58Z","lastTransitionTime":"2026-01-27T20:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.805996 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.806094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.806104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.806121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.806131 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:58Z","lastTransitionTime":"2026-01-27T20:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.909149 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.909198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.909208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.909224 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:58 crc kubenswrapper[4858]: I0127 20:08:58.909234 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:58Z","lastTransitionTime":"2026-01-27T20:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.011535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.011601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.011613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.011630 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.011644 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:59Z","lastTransitionTime":"2026-01-27T20:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.070115 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:08:59 crc kubenswrapper[4858]: E0127 20:08:59.070298 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.077182 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:56:25.199571158 +0000 UTC Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.114303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.114353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.114362 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.114377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.114388 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:59Z","lastTransitionTime":"2026-01-27T20:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.217276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.217336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.217351 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.217371 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.217385 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:59Z","lastTransitionTime":"2026-01-27T20:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.319873 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.319938 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.319963 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.319992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.320016 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:59Z","lastTransitionTime":"2026-01-27T20:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.423093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.423155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.423167 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.423187 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.423199 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:59Z","lastTransitionTime":"2026-01-27T20:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.525714 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.525770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.525780 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.525796 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.525806 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:59Z","lastTransitionTime":"2026-01-27T20:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.627632 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.627675 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.627689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.627702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.627712 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:59Z","lastTransitionTime":"2026-01-27T20:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.730447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.730495 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.730506 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.730522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.730535 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:59Z","lastTransitionTime":"2026-01-27T20:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.833616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.833657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.833665 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.833680 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.833691 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:59Z","lastTransitionTime":"2026-01-27T20:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.935902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.935937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.935946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.935959 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:08:59 crc kubenswrapper[4858]: I0127 20:08:59.935968 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:08:59Z","lastTransitionTime":"2026-01-27T20:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.037873 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.037923 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.037934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.037950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.037961 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:00Z","lastTransitionTime":"2026-01-27T20:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.070086 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.070124 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.070224 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:00 crc kubenswrapper[4858]: E0127 20:09:00.070321 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:00 crc kubenswrapper[4858]: E0127 20:09:00.070397 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:00 crc kubenswrapper[4858]: E0127 20:09:00.070486 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.077695 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 21:44:28.549680026 +0000 UTC Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.140676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.140715 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.140724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.140738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.140747 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:00Z","lastTransitionTime":"2026-01-27T20:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.243201 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.243247 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.243268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.243283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.243294 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:00Z","lastTransitionTime":"2026-01-27T20:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.345839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.345943 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.345957 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.345978 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.345989 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:00Z","lastTransitionTime":"2026-01-27T20:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.448731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.448805 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.448823 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.448845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.448863 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:00Z","lastTransitionTime":"2026-01-27T20:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.551568 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.551631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.551645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.551668 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.551683 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:00Z","lastTransitionTime":"2026-01-27T20:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.654032 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.654081 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.654089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.654103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.654115 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:00Z","lastTransitionTime":"2026-01-27T20:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.757282 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.757326 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.757335 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.757352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.757364 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:00Z","lastTransitionTime":"2026-01-27T20:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.860322 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.860363 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.860371 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.860385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.860394 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:00Z","lastTransitionTime":"2026-01-27T20:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.963535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.963635 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.963650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.963672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:00 crc kubenswrapper[4858]: I0127 20:09:00.963694 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:00Z","lastTransitionTime":"2026-01-27T20:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.067726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.067790 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.067807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.067834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.067853 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:01Z","lastTransitionTime":"2026-01-27T20:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.070885 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:01 crc kubenswrapper[4858]: E0127 20:09:01.071124 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.078151 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 23:36:54.303873837 +0000 UTC Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.170687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.170750 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.170764 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.170788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.170805 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:01Z","lastTransitionTime":"2026-01-27T20:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.275942 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.276031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.276088 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.276114 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.276131 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:01Z","lastTransitionTime":"2026-01-27T20:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.379481 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.379533 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.379546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.379574 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.379584 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:01Z","lastTransitionTime":"2026-01-27T20:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.481708 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.481746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.481754 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.481767 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.481776 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:01Z","lastTransitionTime":"2026-01-27T20:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.584292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.584344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.584356 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.584373 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.584384 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:01Z","lastTransitionTime":"2026-01-27T20:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.687364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.687439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.687451 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.687473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.687487 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:01Z","lastTransitionTime":"2026-01-27T20:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.790984 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.791042 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.791054 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.791076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.791092 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:01Z","lastTransitionTime":"2026-01-27T20:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.894519 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.894595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.894607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.894625 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.894637 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:01Z","lastTransitionTime":"2026-01-27T20:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.911533 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:09:01 crc kubenswrapper[4858]: E0127 20:09:01.911736 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:05.911698622 +0000 UTC m=+150.619514328 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.911874 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.912018 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:01 crc kubenswrapper[4858]: E0127 20:09:01.912052 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:09:01 crc kubenswrapper[4858]: E0127 20:09:01.912184 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:10:05.912155415 +0000 UTC m=+150.619971121 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:09:01 crc kubenswrapper[4858]: E0127 20:09:01.912208 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 20:09:01 crc kubenswrapper[4858]: E0127 20:09:01.912262 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:10:05.912251497 +0000 UTC m=+150.620067393 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.997713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.997768 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.997778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.997798 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:01 crc kubenswrapper[4858]: I0127 20:09:01.997811 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:01Z","lastTransitionTime":"2026-01-27T20:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.013600 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.013693 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:02 crc kubenswrapper[4858]: E0127 20:09:02.013869 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 20:09:02 crc kubenswrapper[4858]: E0127 20:09:02.013891 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 20:09:02 crc kubenswrapper[4858]: E0127 20:09:02.013907 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:09:02 crc kubenswrapper[4858]: E0127 20:09:02.013969 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 20:10:06.013951469 +0000 UTC m=+150.721767185 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:09:02 crc kubenswrapper[4858]: E0127 20:09:02.014074 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 20:09:02 crc kubenswrapper[4858]: E0127 20:09:02.014203 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 20:09:02 crc kubenswrapper[4858]: E0127 20:09:02.014228 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:09:02 crc kubenswrapper[4858]: E0127 20:09:02.014346 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 20:10:06.014311839 +0000 UTC m=+150.722127725 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.070337 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.070337 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.070501 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:02 crc kubenswrapper[4858]: E0127 20:09:02.070670 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:02 crc kubenswrapper[4858]: E0127 20:09:02.070806 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:02 crc kubenswrapper[4858]: E0127 20:09:02.070956 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.079016 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 03:46:46.38011009 +0000 UTC Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.100691 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.100739 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.100750 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.100772 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.100784 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:02Z","lastTransitionTime":"2026-01-27T20:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.203404 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.203472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.203486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.203505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.203518 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:02Z","lastTransitionTime":"2026-01-27T20:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.307092 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.307163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.307175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.307196 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.307208 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:02Z","lastTransitionTime":"2026-01-27T20:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.409463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.409510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.409522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.409539 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.409573 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:02Z","lastTransitionTime":"2026-01-27T20:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.512011 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.512083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.512093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.512107 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.512124 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:02Z","lastTransitionTime":"2026-01-27T20:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.615390 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.615445 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.615458 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.615480 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.615491 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:02Z","lastTransitionTime":"2026-01-27T20:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.718248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.718308 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.718319 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.718334 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.718367 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:02Z","lastTransitionTime":"2026-01-27T20:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.821405 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.821444 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.821453 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.821471 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.821488 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:02Z","lastTransitionTime":"2026-01-27T20:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.923844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.923901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.923914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.923933 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:02 crc kubenswrapper[4858]: I0127 20:09:02.923945 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:02Z","lastTransitionTime":"2026-01-27T20:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.027169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.027212 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.027221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.027238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.027253 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:03Z","lastTransitionTime":"2026-01-27T20:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.070159 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:03 crc kubenswrapper[4858]: E0127 20:09:03.070369 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.080172 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:56:02.69257541 +0000 UTC Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.129903 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.129953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.129964 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.129986 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.130001 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:03Z","lastTransitionTime":"2026-01-27T20:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.233444 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.233488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.233499 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.233515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.233528 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:03Z","lastTransitionTime":"2026-01-27T20:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.336342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.336405 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.336420 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.336444 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.336460 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:03Z","lastTransitionTime":"2026-01-27T20:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.439370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.439418 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.439429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.439445 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.439456 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:03Z","lastTransitionTime":"2026-01-27T20:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.542099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.542159 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.542173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.542196 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.542212 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:03Z","lastTransitionTime":"2026-01-27T20:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.644566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.644613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.644622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.644636 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.644646 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:03Z","lastTransitionTime":"2026-01-27T20:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.747232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.747343 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.747361 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.747387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.747405 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:03Z","lastTransitionTime":"2026-01-27T20:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.850371 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.850450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.850460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.850473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.850482 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:03Z","lastTransitionTime":"2026-01-27T20:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.953810 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.953843 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.953850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.953866 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:03 crc kubenswrapper[4858]: I0127 20:09:03.953875 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:03Z","lastTransitionTime":"2026-01-27T20:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.055958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.056011 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.056023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.056041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.056053 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:04Z","lastTransitionTime":"2026-01-27T20:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.070830 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.070855 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:04 crc kubenswrapper[4858]: E0127 20:09:04.070954 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.071079 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:04 crc kubenswrapper[4858]: E0127 20:09:04.071239 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:04 crc kubenswrapper[4858]: E0127 20:09:04.071337 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.080983 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 14:56:27.262429197 +0000 UTC Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.158208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.158257 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.158268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.158288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.158301 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:04Z","lastTransitionTime":"2026-01-27T20:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.260775 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.260825 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.260843 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.260860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.260882 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:04Z","lastTransitionTime":"2026-01-27T20:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.363025 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.363076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.363095 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.363112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.363126 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:04Z","lastTransitionTime":"2026-01-27T20:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.465774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.465807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.465816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.465830 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.465840 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:04Z","lastTransitionTime":"2026-01-27T20:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.567481 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.567530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.567540 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.567573 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.567586 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:04Z","lastTransitionTime":"2026-01-27T20:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.670118 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.670195 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.670215 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.670239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.670256 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:04Z","lastTransitionTime":"2026-01-27T20:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.773170 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.773227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.773236 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.773251 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.773259 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:04Z","lastTransitionTime":"2026-01-27T20:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.875772 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.875835 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.875855 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.875880 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.875898 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:04Z","lastTransitionTime":"2026-01-27T20:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.978871 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.978914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.978939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.978958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:04 crc kubenswrapper[4858]: I0127 20:09:04.978969 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:04Z","lastTransitionTime":"2026-01-27T20:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.070598 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:05 crc kubenswrapper[4858]: E0127 20:09:05.070736 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.081070 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 12:00:48.41433845 +0000 UTC Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.081096 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.081123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.081135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.081163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.081174 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:05Z","lastTransitionTime":"2026-01-27T20:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.183278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.183348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.183370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.183397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.183414 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:05Z","lastTransitionTime":"2026-01-27T20:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.285515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.285605 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.285630 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.285655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.285671 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:05Z","lastTransitionTime":"2026-01-27T20:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.387788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.387826 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.387860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.387876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.387884 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:05Z","lastTransitionTime":"2026-01-27T20:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.491208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.491284 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.491295 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.491311 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.491322 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:05Z","lastTransitionTime":"2026-01-27T20:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.593697 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.593734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.593743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.593770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.593780 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:05Z","lastTransitionTime":"2026-01-27T20:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.696803 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.696859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.696877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.696905 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.696923 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:05Z","lastTransitionTime":"2026-01-27T20:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.800094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.800153 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.800169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.800191 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.800209 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:05Z","lastTransitionTime":"2026-01-27T20:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.902766 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.902819 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.902836 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.902858 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:05 crc kubenswrapper[4858]: I0127 20:09:05.902878 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:05Z","lastTransitionTime":"2026-01-27T20:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.006883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.006940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.006959 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.006981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.006997 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:06Z","lastTransitionTime":"2026-01-27T20:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.070900 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.070965 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:06 crc kubenswrapper[4858]: E0127 20:09:06.071148 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.071171 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:06 crc kubenswrapper[4858]: E0127 20:09:06.071279 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:06 crc kubenswrapper[4858]: E0127 20:09:06.071492 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.081226 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 07:42:46.788617829 +0000 UTC Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.095522 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57801dd9a207d6a59bdd79e9a8c06e2d2bce4e40905aa52aaf172b2c9430703f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:55Z\\\",\\\"message\\\":\\\"2026-01-27T20:08:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_85a52741-4d77-4c3a-b812-eaba48f0d56f\\\\n2026-01-27T20:08:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_85a52741-4d77-4c3a-b812-eaba48f0d56f to /host/opt/cni/bin/\\\\n2026-01-27T20:08:10Z [verbose] multus-daemon started\\\\n2026-01-27T20:08:10Z [verbose] Readiness Indicator file check\\\\n2026-01-27T20:08:55Z [error] have you 
checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.109417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.109497 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.109516 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.109541 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.109588 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:06Z","lastTransitionTime":"2026-01-27T20:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.113110 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.129857 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.143809 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.156958 4858 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 
20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.175032 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.189339 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.209741 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:39Z\\\",\\\"message\\\":\\\" 6612 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.549402 6612 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 20:08:39.549426 6612 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-855m5 in node crc\\\\nI0127 20:08:39.549434 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-855m5 after 0 failed attempt(s)\\\\nI0127 20:08:39.549441 6612 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.548953 6612 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0127 20:08:39.549458 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0127 20:08:39.549463 6612 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0127 20:08:39.549489 6612 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.211860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.211889 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.211898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.211914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.211924 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:06Z","lastTransitionTime":"2026-01-27T20:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.233125 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.249582 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6e9abcc-3467-43d4-809d-d4d9c3d19a17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bcbab522a48af8a7103c1e3c0a2bf06df8763675f2f39b24f559d3a40ae32e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7f8cb8cd1313fee38d658392c84878c4f22e406e5b48926b09a362999077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11991cd32eec68e9104c1f58fc2bd7d2f78a38e0f3217d4dd1bbc52038bed63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.262886 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.276421 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.291352 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.308013 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.314389 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.314442 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.314455 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.314473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.314487 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:06Z","lastTransitionTime":"2026-01-27T20:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.321347 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.334246 4858 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.349321 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.364058 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.417567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.417618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.417630 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.417649 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.417663 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:06Z","lastTransitionTime":"2026-01-27T20:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.479243 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.479299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.479312 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.479332 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.479347 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:06Z","lastTransitionTime":"2026-01-27T20:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:06 crc kubenswrapper[4858]: E0127 20:09:06.494183 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.498114 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.498180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.498192 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.498217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.498237 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:06Z","lastTransitionTime":"2026-01-27T20:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:06 crc kubenswrapper[4858]: E0127 20:09:06.512524 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.516670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.516735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.516752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.516776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.516796 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:06Z","lastTransitionTime":"2026-01-27T20:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:06 crc kubenswrapper[4858]: E0127 20:09:06.528823 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.532031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.532152 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.532218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.532297 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.532369 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:06Z","lastTransitionTime":"2026-01-27T20:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:06 crc kubenswrapper[4858]: E0127 20:09:06.547067 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.551997 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.552039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.552048 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.552064 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.552075 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:06Z","lastTransitionTime":"2026-01-27T20:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:06 crc kubenswrapper[4858]: E0127 20:09:06.567324 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T20:09:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b322549-2745-4c40-a90f-d799751df1f2\\\",\\\"systemUUID\\\":\\\"e10118a3-8956-4599-b1a5-221ab0a35848\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:06Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:06 crc kubenswrapper[4858]: E0127 20:09:06.567488 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.569313 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.569365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.569375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.569391 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.569402 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:06Z","lastTransitionTime":"2026-01-27T20:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.671823 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.671858 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.671868 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.671881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.671891 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:06Z","lastTransitionTime":"2026-01-27T20:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.774018 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.774068 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.774077 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.774092 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.774104 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:06Z","lastTransitionTime":"2026-01-27T20:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.876729 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.876780 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.876792 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.876809 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.876823 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:06Z","lastTransitionTime":"2026-01-27T20:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.979157 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.979199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.979208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.979227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:06 crc kubenswrapper[4858]: I0127 20:09:06.979238 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:06Z","lastTransitionTime":"2026-01-27T20:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.070888 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:07 crc kubenswrapper[4858]: E0127 20:09:07.071010 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.081295 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.081336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.081347 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.081361 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.081373 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:07Z","lastTransitionTime":"2026-01-27T20:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.081372 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:43:18.201342552 +0000 UTC Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.184182 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.184244 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.184261 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.184282 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.184300 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:07Z","lastTransitionTime":"2026-01-27T20:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.286806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.286891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.286916 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.286949 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.286973 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:07Z","lastTransitionTime":"2026-01-27T20:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.390404 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.390526 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.390545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.390603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.390626 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:07Z","lastTransitionTime":"2026-01-27T20:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.493135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.493210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.493229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.493254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.493272 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:07Z","lastTransitionTime":"2026-01-27T20:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.595908 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.595969 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.595985 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.596008 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.596022 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:07Z","lastTransitionTime":"2026-01-27T20:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.699440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.699505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.699519 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.699540 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.699577 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:07Z","lastTransitionTime":"2026-01-27T20:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.802391 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.802429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.802438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.802452 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.802461 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:07Z","lastTransitionTime":"2026-01-27T20:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.904776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.904825 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.904841 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.904858 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:07 crc kubenswrapper[4858]: I0127 20:09:07.904869 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:07Z","lastTransitionTime":"2026-01-27T20:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.007883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.007955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.007966 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.007991 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.008009 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:08Z","lastTransitionTime":"2026-01-27T20:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.070570 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.070693 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:08 crc kubenswrapper[4858]: E0127 20:09:08.070780 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.070847 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:08 crc kubenswrapper[4858]: E0127 20:09:08.070891 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:08 crc kubenswrapper[4858]: E0127 20:09:08.071086 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.082521 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:31:29.085758646 +0000 UTC Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.111883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.111940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.111952 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.111982 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.111999 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:08Z","lastTransitionTime":"2026-01-27T20:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.215795 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.215888 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.215908 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.215936 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.215958 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:08Z","lastTransitionTime":"2026-01-27T20:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.318034 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.318099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.318115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.318133 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.318145 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:08Z","lastTransitionTime":"2026-01-27T20:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.420838 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.420894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.420905 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.420927 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.420941 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:08Z","lastTransitionTime":"2026-01-27T20:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.524510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.524614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.524628 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.524653 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.524666 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:08Z","lastTransitionTime":"2026-01-27T20:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.627064 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.627117 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.627125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.627138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.627146 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:08Z","lastTransitionTime":"2026-01-27T20:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.729872 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.729906 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.729915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.729928 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.729938 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:08Z","lastTransitionTime":"2026-01-27T20:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.833095 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.833163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.833188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.833217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.833239 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:08Z","lastTransitionTime":"2026-01-27T20:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.936374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.936416 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.936425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.936438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:08 crc kubenswrapper[4858]: I0127 20:09:08.936448 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:08Z","lastTransitionTime":"2026-01-27T20:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.038880 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.038928 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.038943 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.038960 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.038974 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:09Z","lastTransitionTime":"2026-01-27T20:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.070315 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:09 crc kubenswrapper[4858]: E0127 20:09:09.070487 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.083646 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 17:39:45.653250283 +0000 UTC Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.142143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.142202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.142227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.142252 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.142271 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:09Z","lastTransitionTime":"2026-01-27T20:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.246091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.246133 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.246143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.246159 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.246168 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:09Z","lastTransitionTime":"2026-01-27T20:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.349684 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.349739 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.349757 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.349781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.349799 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:09Z","lastTransitionTime":"2026-01-27T20:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.452931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.452996 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.453009 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.453033 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.453047 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:09Z","lastTransitionTime":"2026-01-27T20:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.555831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.555897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.555909 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.556122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.556136 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:09Z","lastTransitionTime":"2026-01-27T20:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.658460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.658500 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.658509 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.658523 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.658531 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:09Z","lastTransitionTime":"2026-01-27T20:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.760948 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.760998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.761010 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.761026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.761040 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:09Z","lastTransitionTime":"2026-01-27T20:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.864752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.864814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.864831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.864854 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.864872 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:09Z","lastTransitionTime":"2026-01-27T20:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.967693 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.967735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.967744 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.967761 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:09:09 crc kubenswrapper[4858]: I0127 20:09:09.967773 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:09Z","lastTransitionTime":"2026-01-27T20:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.069771 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.069808 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.069818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.069833 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.069843 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:10Z","lastTransitionTime":"2026-01-27T20:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.069916 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.069948 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 20:09:10 crc kubenswrapper[4858]: E0127 20:09:10.070016 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 20:09:10 crc kubenswrapper[4858]: E0127 20:09:10.070177 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.070227 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm"
Jan 27 20:09:10 crc kubenswrapper[4858]: E0127 20:09:10.070603 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.071768 4858 scope.go:117] "RemoveContainer" containerID="b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.084483 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 01:31:49.565876415 +0000 UTC
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.085501 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.173200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.173266 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.173283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.173309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.173332 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:10Z","lastTransitionTime":"2026-01-27T20:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.276755 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.276798 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.276810 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.276828 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.276842 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:10Z","lastTransitionTime":"2026-01-27T20:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.379743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.379780 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.379788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.379803 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.379813 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:10Z","lastTransitionTime":"2026-01-27T20:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.483358 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.483411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.483423 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.483444 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.483460 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:10Z","lastTransitionTime":"2026-01-27T20:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.583936 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovnkube-controller/2.log"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.586682 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerStarted","Data":"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879"}
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.587402 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.588321 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.588367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.588378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.588394 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.588409 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:10Z","lastTransitionTime":"2026-01-27T20:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.611600 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.630095 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6e9abcc-3467-43d4-809d-d4d9c3d19a17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bcbab522a48af8a7103c1e3c0a2bf06df8763675f2f39b24f559d3a40ae32e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7f8cb8cd1313fee38d658392c84878c4f22e406e5b48926b09a362999077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11991cd32eec68e9104c1f58fc2bd7d2f78a38e0f3217d4dd1bbc52038bed63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.649531 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 
2025-08-24T17:21:41Z" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.661353 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.675131 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:10 crc 
kubenswrapper[4858]: I0127 20:09:10.684373 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.690030 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.690066 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.690076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.690093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.690103 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:10Z","lastTransitionTime":"2026-01-27T20:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.705767 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:39Z\\\",\\\"message\\\":\\\" 6612 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.549402 6612 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 20:08:39.549426 6612 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-855m5 in node crc\\\\nI0127 20:08:39.549434 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-855m5 after 0 failed attempt(s)\\\\nI0127 20:08:39.549441 6612 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.548953 6612 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0127 20:08:39.549458 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0127 20:08:39.549463 6612 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0127 20:08:39.549489 6612 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed 
to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.721180 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.735784 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.749474 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.764885 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.781855 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.792600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.792678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.792687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.792703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.792713 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:10Z","lastTransitionTime":"2026-01-27T20:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.796453 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.807526 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.819268 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.829856 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2b148f-8245-4714-893f-639c4ef1f4a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33d0a534dcd9e97a73f9b9fb89b269a118c6cd9d353f36be6699946cf46a8651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1937696495b8d306a64f1efcdab4efa50eeafd76c0352b78e4d2d4b43c3bcd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1937696495b8d306a64f1efcdab4efa50eeafd76c0352b78e4d2d4b43c3bcd84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.841195 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.852538 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57801dd9a207d6a59bdd79e9a8c06e2d2bce4e40905aa52aaf172b2c9430703f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:55Z\\\",\\\"message\\\":\\\"2026-01-27T20:08:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_85a52741-4d77-4c3a-b812-eaba48f0d56f\\\\n2026-01-27T20:08:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_85a52741-4d77-4c3a-b812-eaba48f0d56f to /host/opt/cni/bin/\\\\n2026-01-27T20:08:10Z [verbose] multus-daemon started\\\\n2026-01-27T20:08:10Z [verbose] Readiness Indicator file check\\\\n2026-01-27T20:08:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.863876 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:10Z is after 2025-08-24T17:21:41Z" Jan 27 
20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.894874 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.894919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.894931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.894945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.894953 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:10Z","lastTransitionTime":"2026-01-27T20:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.997237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.997297 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.997308 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.997323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:10 crc kubenswrapper[4858]: I0127 20:09:10.997332 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:10Z","lastTransitionTime":"2026-01-27T20:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.069952 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:11 crc kubenswrapper[4858]: E0127 20:09:11.070103 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
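
[Annotation] Meanwhile the node-level loop repeats: kubelet keeps setting the Ready condition to False because the container runtime's network is not ready, and it skips syncing any pod that needs pod networking ("No sandbox for pod can be found" / "Error syncing pod, skipping"); only host-network static pods such as kube-apiserver-crc keep running. The gate is simply the presence of a CNI configuration file in /etc/kubernetes/cni/net.d/, which the network plugin writes once it is up. A trivial check for what kubelet is waiting on (directory from the NodeNotReady message; the extension filter is an assumption about typical CNI config naming):

    import os

    CNI_DIR = "/etc/kubernetes/cni/net.d/"  # directory named in the NodeNotReady message

    def cni_configs(path: str) -> list[str]:
        """List candidate CNI config files; the node stays NotReady while this is empty."""
        if not os.path.isdir(path):
            return []
        return sorted(f for f in os.listdir(path)
                      if f.endswith((".conf", ".conflist", ".json")))

    configs = cni_configs(CNI_DIR)
    print(configs if configs else "no CNI configuration file -- node stays NotReady")
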
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.085189 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 03:04:37.634584321 +0000 UTC Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.100138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.100181 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.100194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.100210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.100222 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:11Z","lastTransitionTime":"2026-01-27T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.203234 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.203277 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.203288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.203303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.203314 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:11Z","lastTransitionTime":"2026-01-27T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.305821 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.305866 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.305875 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.305888 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.305899 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:11Z","lastTransitionTime":"2026-01-27T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.408314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.408365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.408391 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.408417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.408430 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:11Z","lastTransitionTime":"2026-01-27T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.511194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.511260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.511269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.511282 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.511291 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:11Z","lastTransitionTime":"2026-01-27T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.593065 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovnkube-controller/3.log" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.593953 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovnkube-controller/2.log" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.597170 4858 generic.go:334] "Generic (PLEG): container finished" podID="5cda3ac1-7db7-4215-a301-b757743bff59" containerID="a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879" exitCode=1 Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.597224 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerDied","Data":"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879"} Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.597278 4858 scope.go:117] "RemoveContainer" containerID="b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.598122 4858 scope.go:117] "RemoveContainer" containerID="a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879" Jan 27 20:09:11 crc kubenswrapper[4858]: E0127 20:09:11.598365 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.613827 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.613873 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.613885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.613901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.613915 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:11Z","lastTransitionTime":"2026-01-27T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.619691 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.633816 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.645498 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.655866 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.676265 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.700584 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.716683 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.716718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.716727 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.716741 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.716751 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:11Z","lastTransitionTime":"2026-01-27T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.719081 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57801dd9a207d6a59bdd79e9a8c06e2d2bce4e40905aa52aaf172b2c9430703f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:55Z\\\",\\\"message\\\":\\\"2026-01-27T20:08:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_85a52741-4d77-4c3a-b812-eaba48f0d56f\\\\n2026-01-27T20:08:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_85a52741-4d77-4c3a-b812-eaba48f0d56f to /host/opt/cni/bin/\\\\n2026-01-27T20:08:10Z [verbose] multus-daemon started\\\\n2026-01-27T20:08:10Z [verbose] Readiness Indicator file check\\\\n2026-01-27T20:08:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.732159 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 
20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.742797 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2b148f-8245-4714-893f-639c4ef1f4a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33d0a534dcd9e97a73f9b9fb89b269a118c6cd9d353f36be6699946cf46a8651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1937696495b8d306a64f1efcdab4efa50eeafd76c0352b78e4d2d4b43c3bcd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1937696495b8d306a64f1efcdab4efa50eeafd76c0352b78e4d2d4b43c3bcd84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.757389 4858 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6e9abcc-3467-43d4-809d-d4d9c3d19a17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bcbab522a48af8a7103c1e3c0a2bf06df8763675f2f39b24f559d3a40ae32e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7f8cb8cd1313fee38d658392c84878c4f22e406e5b48926b09a362999077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11991cd32eec68e9104c1f58fc2bd7d2f78a38e0f3217d4dd1bbc52038bed63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\
\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.770273 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.780501 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.792992 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.803165 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.818377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.818413 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.818424 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.818438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.818449 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:11Z","lastTransitionTime":"2026-01-27T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.821951 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7c8571fd5e25efcd05cfaa476ddd9944d8b8e2f1de77e5215350939f032be3e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:39Z\\\",\\\"message\\\":\\\" 6612 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.549402 6612 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 20:08:39.549426 6612 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-855m5 in node crc\\\\nI0127 20:08:39.549434 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-855m5 after 0 failed attempt(s)\\\\nI0127 20:08:39.549441 6612 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-855m5\\\\nI0127 20:08:39.548953 6612 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0127 20:08:39.549458 6612 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0127 20:08:39.549463 6612 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0127 20:08:39.549489 6612 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed 
to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:09:11Z\\\",\\\"message\\\":\\\"051 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 20:09:11.079877 7051 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 20:09:11.080005 7051 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 20:09:11.080050 7051 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 20:09:11.081865 7051 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 20:09:11.081907 7051 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 20:09:11.081949 7051 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 20:09:11.083680 7051 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 20:09:11.083769 7051 factory.go:656] Stopping watch factory\\\\nI0127 20:09:11.112570 7051 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0127 20:09:11.112600 7051 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0127 20:09:11.112659 7051 ovnkube.go:599] Stopped ovnkube\\\\nI0127 20:09:11.112705 7051 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 20:09:11.112827 7051 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:09:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.839759 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a415fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.853307 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.870196 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.883092 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:11Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.921051 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.921084 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.921095 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.921109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:11 crc kubenswrapper[4858]: I0127 20:09:11.921120 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:11Z","lastTransitionTime":"2026-01-27T20:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.022803 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.022839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.022847 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.022861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.022870 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:12Z","lastTransitionTime":"2026-01-27T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.070648 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.070751 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:12 crc kubenswrapper[4858]: E0127 20:09:12.070792 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:12 crc kubenswrapper[4858]: E0127 20:09:12.070880 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.070940 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:12 crc kubenswrapper[4858]: E0127 20:09:12.071122 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.086179 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 23:58:01.320473563 +0000 UTC Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.128675 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.128740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.128757 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.128781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.128797 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:12Z","lastTransitionTime":"2026-01-27T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.231988 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.232047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.232065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.232088 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.232109 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:12Z","lastTransitionTime":"2026-01-27T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.340420 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.340489 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.340503 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.340521 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.340534 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:12Z","lastTransitionTime":"2026-01-27T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.442689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.442732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.442743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.442758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.442769 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:12Z","lastTransitionTime":"2026-01-27T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.544579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.544664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.544679 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.544695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.544705 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:12Z","lastTransitionTime":"2026-01-27T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.602147 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovnkube-controller/3.log" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.604864 4858 scope.go:117] "RemoveContainer" containerID="a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879" Jan 27 20:09:12 crc kubenswrapper[4858]: E0127 20:09:12.605090 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.618240 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.630517 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-855m5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fea6600-49c2-4130-a506-6046f0f7760d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57801dd9a207d6a59bdd79e9a8c06e2d2bce4e40905aa52aaf172b2c9430703f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:08:55Z\\\",\\\"message\\\":\\\"2026-01-27T20:08:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_85a52741-4d77-4c3a-b812-eaba48f0d56f\\\\n2026-01-27T20:08:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_85a52741-4d77-4c3a-b812-eaba48f0d56f to /host/opt/cni/bin/\\\\n2026-01-27T20:08:10Z [verbose] multus-daemon started\\\\n2026-01-27T20:08:10Z [verbose] Readiness Indicator file check\\\\n2026-01-27T20:08:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r7sr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-855m5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.644054 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccbad9b1-e4e8-484e-908d-1695372441e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d59864afaf59196af835a085ad64465dd99e0af5128326cfec03413944bf58ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9643d85fcdccf4d08f922406c5d8f452d26ea4990cc2014a996340bc2e69bd6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9tbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wxhcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 
20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.647377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.647415 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.647437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.647464 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.647480 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:12Z","lastTransitionTime":"2026-01-27T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.655071 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2b148f-8245-4714-893f-639c4ef1f4a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33d0a534dcd9e97a73f9b9fb89b269a118c6cd9d353f36be6699946cf46a8651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1937696495b8d306a64f1efcdab4efa50eeafd76c0352b78e4d2d4b43c3bcd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1937696495b8d306a64f1efcdab4efa50eeafd76c0352b78e4d2d4b43c3bcd84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.665513 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6e9abcc-3467-43d4-809d-d4d9c3d19a17\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bcbab522a48af8a7103c1e3c0a2bf06df8763675f2f39b24f559d3a40ae32e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ee7f8cb8cd1313fee38d658392c84878c4f22e406e5b48926b09a362999077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d11991cd32eec68e9104c1f58fc2bd7d2f78a38e0f3217d4dd1bbc52038bed63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://052679c708a30a543d32a804a9c63993e95f71f6e8ab9bfdb6890d0b6a1c2828\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.676948 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63533222e3105ed0abad6c139ba065209ab65da18925f9a85a88adb65ca3b939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.688869 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50837e4c-bd24-4b62-b1e7-b586e702bd40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efa3aef12041e22be55d710252c4c47c8c095fbb710eb99972d08c2fbf85d939\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lp4vf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-psxnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.702162 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fe084c8-3445-4507-b00f-8c8e6d101426\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22d8353b3a5676ae911aded353cc9451bdcb2189222a9b344c419aa51aea21af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f702bf7050302a4a2c840872d7206be063fd58f85394dd6f362e5ba4d59cf5bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a834ef895ffa5c4411d755791ea640c48830a12d0d630b091e71d0d70f383566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1f9b4cc0be4563f273aa9a1d4779a5c52e2af02bfeafe8adcdf2b92868038593\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f29741276f7270141c87809da48536ad7262046da12e589a80e50e4948e11157\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0f7244c8279627d1ae0523d44f343cb0a1b7416a1f4ee460c64197b7dd1d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98e41fa46ad9b5e56ca04a1bf3f292ef854823262f028df15be08ee660b8b9e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jp25d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d2vhz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.711891 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9d7sv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"02269db9-8212-4591-aa62-f135bf69231c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3554cdc1f3d58b91e48083b90e30ef85db2abddf36bd5eb2aae628cd1b63b772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqswc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9d7sv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.729752 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cda3ac1-7db7-4215-a301-b757743bff59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T20:09:11Z\\\",\\\"message\\\":\\\"051 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 20:09:11.079877 7051 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 20:09:11.080005 7051 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 20:09:11.080050 7051 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 20:09:11.081865 7051 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 20:09:11.081907 7051 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 20:09:11.081949 7051 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 20:09:11.083680 7051 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 20:09:11.083769 7051 factory.go:656] Stopping watch factory\\\\nI0127 20:09:11.112570 7051 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0127 20:09:11.112600 7051 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0127 20:09:11.112659 7051 ovnkube.go:599] Stopped ovnkube\\\\nI0127 20:09:11.112705 7051 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 20:09:11.112827 7051 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:09:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p24pj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rsk7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.746400 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7734690a-15b1-4f85-899c-0efa6d162328\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c79ff4a691098666a7750ddd3974dd8125ab66e326c9bf1abbfd816a1fa67f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cefa1723b17ac4053215a41
5fdfc9ed52f06e3bd4cf5626ee024a9f28a1eb32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d1964cd9c3299945b80fad294dec6e9ee3e9ba6a3a6f8efd8e601502c4ae4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ed563cb5947b8217bc1258b9931576e6e121cdcd4a01d5c48c8848328819cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19764b907a3346aebe40082e193eda2c5d6cdd93c35a0fcafc60c90b32c250ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c6777f3e5df9ef44bc8c0b1f216859a7a5c6f114db0398ecf4a1982a4886055\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa1ab5b3806b7195b081f44ad3a1278ed24a20fd730844db62fc2622c9efb0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fa91fd3a738147a469a4f1f148523d7656a9f96ea52c30de8dedbc33e9ef170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.749781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.749839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:12 crc 
kubenswrapper[4858]: I0127 20:09:12.749852 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.749874 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.749886 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:12Z","lastTransitionTime":"2026-01-27T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.756770 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a736024525fb90e80299b35f080913e3a635456800e4237af35bc614379d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.767481 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.779139 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.792237 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba6fd0fb-9d26-4065-860e-f23aedfd4886\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe8801c97cd01d870aeb9926e17d7f3e0f4570523c963f23437aa6c0e5603db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e84b749da87ee728e2a18936609273ff13ee0bedf27b7d51229d7694932f0f10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83453688ed300dada1f86bc71d353e1c2839dd43a2aec8b91d8631ee5b29b692\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.806898 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://985cd57101a805e51fd0387db72fe39c37c59ff58b6857a2f7c737b491f71c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ff7d75c6f994c4325abb80f49ec2a6036e0cff356ec473e20a03cb76e0637d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.818076 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lqbtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef638e59-7a7d-44a7-b6ae-f8b87b52fc68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c7411c046d6ab5ab0444559b4fe17f906df7936924845e639abddc8c21ad04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dzk5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lqbtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.829999 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bvrdq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:08:18Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j5hlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.843589 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88aaef03-76aa-447e-98ee-ca909788fbdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:08:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T20:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T20:07:57Z\\\",\\\"message\\\":\\\"espace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 20:07:44.833307 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 20:07:44.833959 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-874558094/tls.crt::/tmp/serving-cert-874558094/tls.key\\\\\\\"\\\\nI0127 20:07:57.727322 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 20:07:57.729770 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 20:07:57.729791 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 20:07:57.729812 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 20:07:57.729817 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 20:07:57.736076 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 20:07:57.736115 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736121 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 20:07:57.736127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 20:07:57.736131 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 20:07:57.736135 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nI0127 20:07:57.736121 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 20:07:57.736139 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 20:07:57.738981 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T20:07:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T20:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T20:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T20:07:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T20:09:12Z is after 2025-08-24T17:21:41Z" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.852224 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.852261 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.852270 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.852286 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.852297 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:12Z","lastTransitionTime":"2026-01-27T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.954618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.954651 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.954664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.954681 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:12 crc kubenswrapper[4858]: I0127 20:09:12.954693 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:12Z","lastTransitionTime":"2026-01-27T20:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.056960 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.057020 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.057038 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.057061 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.057078 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:13Z","lastTransitionTime":"2026-01-27T20:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.070291 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:13 crc kubenswrapper[4858]: E0127 20:09:13.070405 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.086939 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 05:52:49.723320432 +0000 UTC Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.159583 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.159616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.159626 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.159645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.159656 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:13Z","lastTransitionTime":"2026-01-27T20:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.262041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.262087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.262097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.262112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.262123 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:13Z","lastTransitionTime":"2026-01-27T20:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.981726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.981767 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.981775 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.981789 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:13 crc kubenswrapper[4858]: I0127 20:09:13.981800 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:13Z","lastTransitionTime":"2026-01-27T20:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.071162 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:14 crc kubenswrapper[4858]: E0127 20:09:14.071255 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.071509 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:14 crc kubenswrapper[4858]: E0127 20:09:14.071601 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.071714 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:14 crc kubenswrapper[4858]: E0127 20:09:14.071786 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.084644 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.084687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.084698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.084712 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.084721 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:14Z","lastTransitionTime":"2026-01-27T20:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.087862 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:10:29.512915646 +0000 UTC Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.188473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.188594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.188620 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.188651 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.188673 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:14Z","lastTransitionTime":"2026-01-27T20:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.908270 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.908535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.908654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.908762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:14 crc kubenswrapper[4858]: I0127 20:09:14.908864 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:14Z","lastTransitionTime":"2026-01-27T20:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.012838 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.013263 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.013494 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.013761 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.013987 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:15Z","lastTransitionTime":"2026-01-27T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.069967 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:15 crc kubenswrapper[4858]: E0127 20:09:15.070325 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.088201 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:58:09.069223748 +0000 UTC Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.117175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.117431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.117679 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.117917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.118120 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:15Z","lastTransitionTime":"2026-01-27T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.221223 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.221270 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.221281 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.221299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.221310 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:15Z","lastTransitionTime":"2026-01-27T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.323990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.324051 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.324068 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.324091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.324108 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:15Z","lastTransitionTime":"2026-01-27T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.427159 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.427253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.427283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.427324 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.427352 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:15Z","lastTransitionTime":"2026-01-27T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.530423 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.530819 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.530991 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.531108 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.531229 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:15Z","lastTransitionTime":"2026-01-27T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.633468 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.633507 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.633518 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.633542 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.633574 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:15Z","lastTransitionTime":"2026-01-27T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.736659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.736703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.736712 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.736729 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.736739 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:15Z","lastTransitionTime":"2026-01-27T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.839709 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.839761 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.839774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.839796 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.839811 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:15Z","lastTransitionTime":"2026-01-27T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.942380 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.942440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.942449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.942464 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:15 crc kubenswrapper[4858]: I0127 20:09:15.942475 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:15Z","lastTransitionTime":"2026-01-27T20:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.044727 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.044786 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.044801 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.044822 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.044839 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:16Z","lastTransitionTime":"2026-01-27T20:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.070795 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:16 crc kubenswrapper[4858]: E0127 20:09:16.070939 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.071014 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:16 crc kubenswrapper[4858]: E0127 20:09:16.071278 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.071285 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:16 crc kubenswrapper[4858]: E0127 20:09:16.071676 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.089278 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 09:09:59.60666789 +0000 UTC Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.122135 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.122114801 podStartE2EDuration="1m18.122114801s" podCreationTimestamp="2026-01-27 20:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:16.121441952 +0000 UTC m=+100.829257688" watchObservedRunningTime="2026-01-27 20:09:16.122114801 +0000 UTC m=+100.829930507" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.147787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.148044 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.148163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.148253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.148337 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:16Z","lastTransitionTime":"2026-01-27T20:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.153622 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=76.153598485 podStartE2EDuration="1m16.153598485s" podCreationTimestamp="2026-01-27 20:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:16.13884714 +0000 UTC m=+100.846662856" watchObservedRunningTime="2026-01-27 20:09:16.153598485 +0000 UTC m=+100.861414211" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.181621 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lqbtf" podStartSLOduration=73.181598253 podStartE2EDuration="1m13.181598253s" podCreationTimestamp="2026-01-27 20:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:16.165609245 +0000 UTC m=+100.873424981" watchObservedRunningTime="2026-01-27 20:09:16.181598253 +0000 UTC m=+100.889413959" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.195505 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=6.195482145 podStartE2EDuration="6.195482145s" podCreationTimestamp="2026-01-27 20:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:16.181587563 +0000 UTC m=+100.889403289" watchObservedRunningTime="2026-01-27 20:09:16.195482145 +0000 UTC m=+100.903297851" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.231009 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wxhcn" podStartSLOduration=72.230979239 podStartE2EDuration="1m12.230979239s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:16.230853705 +0000 UTC m=+100.938669431" watchObservedRunningTime="2026-01-27 20:09:16.230979239 +0000 UTC m=+100.938794945" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.231342 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-855m5" podStartSLOduration=72.231334699 podStartE2EDuration="1m12.231334699s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:16.215900315 +0000 UTC m=+100.923716021" watchObservedRunningTime="2026-01-27 20:09:16.231334699 +0000 UTC m=+100.939150395" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.251392 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.251455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.251466 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.251493 4858 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.251507 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:16Z","lastTransitionTime":"2026-01-27T20:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.258977 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-d2vhz" podStartSLOduration=72.258950517 podStartE2EDuration="1m12.258950517s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:16.25870715 +0000 UTC m=+100.966522856" watchObservedRunningTime="2026-01-27 20:09:16.258950517 +0000 UTC m=+100.966766213" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.272411 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-9d7sv" podStartSLOduration=73.272390735 podStartE2EDuration="1m13.272390735s" podCreationTimestamp="2026-01-27 20:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:16.271685466 +0000 UTC m=+100.979501182" watchObservedRunningTime="2026-01-27 20:09:16.272390735 +0000 UTC m=+100.980206441" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.321092 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=78.321069062 podStartE2EDuration="1m18.321069062s" podCreationTimestamp="2026-01-27 20:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:16.319722625 +0000 UTC m=+101.027538341" watchObservedRunningTime="2026-01-27 20:09:16.321069062 +0000 UTC m=+101.028884768" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.355279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.355731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.355856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.356045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.356213 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:16Z","lastTransitionTime":"2026-01-27T20:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.356438 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.356420612 podStartE2EDuration="48.356420612s" podCreationTimestamp="2026-01-27 20:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:16.338840849 +0000 UTC m=+101.046656555" watchObservedRunningTime="2026-01-27 20:09:16.356420612 +0000 UTC m=+101.064236318" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.389486 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podStartSLOduration=73.389464309 podStartE2EDuration="1m13.389464309s" podCreationTimestamp="2026-01-27 20:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:16.374960571 +0000 UTC m=+101.082776277" watchObservedRunningTime="2026-01-27 20:09:16.389464309 +0000 UTC m=+101.097280035" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.458596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.458637 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.458647 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.458662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.458671 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:16Z","lastTransitionTime":"2026-01-27T20:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.560918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.560952 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.560961 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.560973 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.560981 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:16Z","lastTransitionTime":"2026-01-27T20:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.634687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.634724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.634732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.634746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.634758 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T20:09:16Z","lastTransitionTime":"2026-01-27T20:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.674699 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc"] Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.675107 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.677498 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.677810 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.677997 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.679001 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.762947 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88501d23-dd60-43b5-8ecb-4a58aa0bc71c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nbshc\" (UID: \"88501d23-dd60-43b5-8ecb-4a58aa0bc71c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.763008 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/88501d23-dd60-43b5-8ecb-4a58aa0bc71c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nbshc\" (UID: \"88501d23-dd60-43b5-8ecb-4a58aa0bc71c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.763040 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88501d23-dd60-43b5-8ecb-4a58aa0bc71c-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-nbshc\" (UID: \"88501d23-dd60-43b5-8ecb-4a58aa0bc71c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.763198 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88501d23-dd60-43b5-8ecb-4a58aa0bc71c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nbshc\" (UID: \"88501d23-dd60-43b5-8ecb-4a58aa0bc71c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.763261 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/88501d23-dd60-43b5-8ecb-4a58aa0bc71c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nbshc\" (UID: \"88501d23-dd60-43b5-8ecb-4a58aa0bc71c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.864865 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88501d23-dd60-43b5-8ecb-4a58aa0bc71c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nbshc\" (UID: \"88501d23-dd60-43b5-8ecb-4a58aa0bc71c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.864922 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/88501d23-dd60-43b5-8ecb-4a58aa0bc71c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nbshc\" (UID: \"88501d23-dd60-43b5-8ecb-4a58aa0bc71c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.864948 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88501d23-dd60-43b5-8ecb-4a58aa0bc71c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nbshc\" (UID: \"88501d23-dd60-43b5-8ecb-4a58aa0bc71c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.865007 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88501d23-dd60-43b5-8ecb-4a58aa0bc71c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nbshc\" (UID: \"88501d23-dd60-43b5-8ecb-4a58aa0bc71c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.865035 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/88501d23-dd60-43b5-8ecb-4a58aa0bc71c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nbshc\" (UID: \"88501d23-dd60-43b5-8ecb-4a58aa0bc71c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.865106 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/88501d23-dd60-43b5-8ecb-4a58aa0bc71c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nbshc\" (UID: 
\"88501d23-dd60-43b5-8ecb-4a58aa0bc71c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.865150 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/88501d23-dd60-43b5-8ecb-4a58aa0bc71c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nbshc\" (UID: \"88501d23-dd60-43b5-8ecb-4a58aa0bc71c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.866106 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/88501d23-dd60-43b5-8ecb-4a58aa0bc71c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nbshc\" (UID: \"88501d23-dd60-43b5-8ecb-4a58aa0bc71c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.870871 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88501d23-dd60-43b5-8ecb-4a58aa0bc71c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nbshc\" (UID: \"88501d23-dd60-43b5-8ecb-4a58aa0bc71c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.884138 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/88501d23-dd60-43b5-8ecb-4a58aa0bc71c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nbshc\" (UID: \"88501d23-dd60-43b5-8ecb-4a58aa0bc71c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" Jan 27 20:09:16 crc kubenswrapper[4858]: I0127 20:09:16.988531 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" Jan 27 20:09:17 crc kubenswrapper[4858]: W0127 20:09:17.009514 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88501d23_dd60_43b5_8ecb_4a58aa0bc71c.slice/crio-2d7bb966afd4aa0ea2850bcf1240ccbc957df206898d4445d21b2a446281f131 WatchSource:0}: Error finding container 2d7bb966afd4aa0ea2850bcf1240ccbc957df206898d4445d21b2a446281f131: Status 404 returned error can't find the container with id 2d7bb966afd4aa0ea2850bcf1240ccbc957df206898d4445d21b2a446281f131 Jan 27 20:09:17 crc kubenswrapper[4858]: I0127 20:09:17.070371 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:17 crc kubenswrapper[4858]: E0127 20:09:17.070985 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:17 crc kubenswrapper[4858]: I0127 20:09:17.089875 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:28:17.066122145 +0000 UTC Jan 27 20:09:17 crc kubenswrapper[4858]: I0127 20:09:17.089963 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 20:09:17 crc kubenswrapper[4858]: I0127 20:09:17.101862 4858 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 20:09:17 crc kubenswrapper[4858]: I0127 20:09:17.619633 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" event={"ID":"88501d23-dd60-43b5-8ecb-4a58aa0bc71c","Type":"ContainerStarted","Data":"e34dc29ee67d653e688ca25804860175b75c75f6d25eedda0c8d469b31f692c3"} Jan 27 20:09:17 crc kubenswrapper[4858]: I0127 20:09:17.619687 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" event={"ID":"88501d23-dd60-43b5-8ecb-4a58aa0bc71c","Type":"ContainerStarted","Data":"2d7bb966afd4aa0ea2850bcf1240ccbc957df206898d4445d21b2a446281f131"} Jan 27 20:09:17 crc kubenswrapper[4858]: I0127 20:09:17.633324 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nbshc" podStartSLOduration=74.63329943 podStartE2EDuration="1m14.63329943s" podCreationTimestamp="2026-01-27 20:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:09:17.632184629 +0000 UTC m=+102.340000345" watchObservedRunningTime="2026-01-27 20:09:17.63329943 +0000 UTC m=+102.341115156" Jan 27 20:09:18 crc kubenswrapper[4858]: I0127 20:09:18.070878 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:18 crc kubenswrapper[4858]: I0127 20:09:18.070979 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:18 crc kubenswrapper[4858]: E0127 20:09:18.071043 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:18 crc kubenswrapper[4858]: I0127 20:09:18.070907 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:18 crc kubenswrapper[4858]: E0127 20:09:18.071185 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:18 crc kubenswrapper[4858]: E0127 20:09:18.071231 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:19 crc kubenswrapper[4858]: I0127 20:09:19.070800 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:19 crc kubenswrapper[4858]: E0127 20:09:19.071360 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:20 crc kubenswrapper[4858]: I0127 20:09:20.070944 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:20 crc kubenswrapper[4858]: I0127 20:09:20.071003 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:20 crc kubenswrapper[4858]: E0127 20:09:20.071126 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:20 crc kubenswrapper[4858]: I0127 20:09:20.071226 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:20 crc kubenswrapper[4858]: E0127 20:09:20.071395 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:20 crc kubenswrapper[4858]: E0127 20:09:20.071500 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:21 crc kubenswrapper[4858]: I0127 20:09:21.070520 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:21 crc kubenswrapper[4858]: E0127 20:09:21.070707 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:22 crc kubenswrapper[4858]: I0127 20:09:22.070816 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:22 crc kubenswrapper[4858]: I0127 20:09:22.070855 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:22 crc kubenswrapper[4858]: I0127 20:09:22.070926 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:22 crc kubenswrapper[4858]: E0127 20:09:22.071675 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:22 crc kubenswrapper[4858]: E0127 20:09:22.071788 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:22 crc kubenswrapper[4858]: E0127 20:09:22.071927 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:22 crc kubenswrapper[4858]: I0127 20:09:22.223535 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs\") pod \"network-metrics-daemon-j5hlm\" (UID: \"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\") " pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:22 crc kubenswrapper[4858]: E0127 20:09:22.223720 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 20:09:22 crc kubenswrapper[4858]: E0127 20:09:22.223836 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs podName:3fa7e9cb-b195-401a-b57c-bdb47f36ffb8 nodeName:}" failed. 
No retries permitted until 2026-01-27 20:10:26.223807907 +0000 UTC m=+170.931623653 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs") pod "network-metrics-daemon-j5hlm" (UID: "3fa7e9cb-b195-401a-b57c-bdb47f36ffb8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 20:09:23 crc kubenswrapper[4858]: I0127 20:09:23.070025 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:23 crc kubenswrapper[4858]: E0127 20:09:23.070257 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:24 crc kubenswrapper[4858]: I0127 20:09:24.069987 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:24 crc kubenswrapper[4858]: I0127 20:09:24.069990 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:24 crc kubenswrapper[4858]: I0127 20:09:24.069983 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:24 crc kubenswrapper[4858]: E0127 20:09:24.070411 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:24 crc kubenswrapper[4858]: E0127 20:09:24.070480 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:24 crc kubenswrapper[4858]: E0127 20:09:24.070159 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:25 crc kubenswrapper[4858]: I0127 20:09:25.070280 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:25 crc kubenswrapper[4858]: E0127 20:09:25.070463 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:25 crc kubenswrapper[4858]: I0127 20:09:25.071133 4858 scope.go:117] "RemoveContainer" containerID="a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879" Jan 27 20:09:25 crc kubenswrapper[4858]: E0127 20:09:25.071295 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" Jan 27 20:09:26 crc kubenswrapper[4858]: I0127 20:09:26.070507 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:26 crc kubenswrapper[4858]: I0127 20:09:26.070592 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:26 crc kubenswrapper[4858]: I0127 20:09:26.070616 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:26 crc kubenswrapper[4858]: E0127 20:09:26.071456 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:26 crc kubenswrapper[4858]: E0127 20:09:26.071531 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:26 crc kubenswrapper[4858]: E0127 20:09:26.071629 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:27 crc kubenswrapper[4858]: I0127 20:09:27.070747 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:27 crc kubenswrapper[4858]: E0127 20:09:27.071123 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:28 crc kubenswrapper[4858]: I0127 20:09:28.071049 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:28 crc kubenswrapper[4858]: I0127 20:09:28.071122 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:28 crc kubenswrapper[4858]: E0127 20:09:28.071271 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:28 crc kubenswrapper[4858]: I0127 20:09:28.071288 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:28 crc kubenswrapper[4858]: E0127 20:09:28.071391 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:28 crc kubenswrapper[4858]: E0127 20:09:28.071477 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:29 crc kubenswrapper[4858]: I0127 20:09:29.070185 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:29 crc kubenswrapper[4858]: E0127 20:09:29.070417 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:30 crc kubenswrapper[4858]: I0127 20:09:30.070901 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:30 crc kubenswrapper[4858]: I0127 20:09:30.071000 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:30 crc kubenswrapper[4858]: E0127 20:09:30.071119 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:30 crc kubenswrapper[4858]: E0127 20:09:30.071296 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:30 crc kubenswrapper[4858]: I0127 20:09:30.071505 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:30 crc kubenswrapper[4858]: E0127 20:09:30.071700 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:31 crc kubenswrapper[4858]: I0127 20:09:31.070364 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:31 crc kubenswrapper[4858]: E0127 20:09:31.070721 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:32 crc kubenswrapper[4858]: I0127 20:09:32.070623 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:32 crc kubenswrapper[4858]: I0127 20:09:32.070673 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:32 crc kubenswrapper[4858]: I0127 20:09:32.070853 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:32 crc kubenswrapper[4858]: E0127 20:09:32.071020 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:32 crc kubenswrapper[4858]: E0127 20:09:32.071344 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:32 crc kubenswrapper[4858]: E0127 20:09:32.071449 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:33 crc kubenswrapper[4858]: I0127 20:09:33.070897 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:33 crc kubenswrapper[4858]: E0127 20:09:33.071019 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:34 crc kubenswrapper[4858]: I0127 20:09:34.070865 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:34 crc kubenswrapper[4858]: I0127 20:09:34.071025 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:34 crc kubenswrapper[4858]: E0127 20:09:34.071157 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:34 crc kubenswrapper[4858]: E0127 20:09:34.071255 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:34 crc kubenswrapper[4858]: I0127 20:09:34.070911 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:34 crc kubenswrapper[4858]: E0127 20:09:34.071839 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:35 crc kubenswrapper[4858]: I0127 20:09:35.070433 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:35 crc kubenswrapper[4858]: E0127 20:09:35.070649 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:36 crc kubenswrapper[4858]: E0127 20:09:36.060412 4858 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 20:09:36 crc kubenswrapper[4858]: I0127 20:09:36.070258 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:36 crc kubenswrapper[4858]: I0127 20:09:36.070353 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:36 crc kubenswrapper[4858]: E0127 20:09:36.073737 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:36 crc kubenswrapper[4858]: I0127 20:09:36.073842 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:36 crc kubenswrapper[4858]: E0127 20:09:36.074036 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:36 crc kubenswrapper[4858]: E0127 20:09:36.074019 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:36 crc kubenswrapper[4858]: E0127 20:09:36.168221 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 20:09:37 crc kubenswrapper[4858]: I0127 20:09:37.070747 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:37 crc kubenswrapper[4858]: E0127 20:09:37.070953 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:37 crc kubenswrapper[4858]: I0127 20:09:37.071620 4858 scope.go:117] "RemoveContainer" containerID="a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879" Jan 27 20:09:37 crc kubenswrapper[4858]: E0127 20:09:37.071777 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" Jan 27 20:09:38 crc kubenswrapper[4858]: I0127 20:09:38.070440 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:38 crc kubenswrapper[4858]: I0127 20:09:38.070453 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:38 crc kubenswrapper[4858]: E0127 20:09:38.071223 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:38 crc kubenswrapper[4858]: I0127 20:09:38.070514 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:38 crc kubenswrapper[4858]: E0127 20:09:38.071459 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:38 crc kubenswrapper[4858]: E0127 20:09:38.071456 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:39 crc kubenswrapper[4858]: I0127 20:09:39.070723 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:39 crc kubenswrapper[4858]: E0127 20:09:39.070904 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:40 crc kubenswrapper[4858]: I0127 20:09:40.070471 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:40 crc kubenswrapper[4858]: I0127 20:09:40.070512 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:40 crc kubenswrapper[4858]: I0127 20:09:40.070730 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:40 crc kubenswrapper[4858]: E0127 20:09:40.070727 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:40 crc kubenswrapper[4858]: E0127 20:09:40.070922 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:40 crc kubenswrapper[4858]: E0127 20:09:40.071094 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:41 crc kubenswrapper[4858]: I0127 20:09:41.070034 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:41 crc kubenswrapper[4858]: E0127 20:09:41.070243 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:41 crc kubenswrapper[4858]: E0127 20:09:41.170014 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 20:09:42 crc kubenswrapper[4858]: I0127 20:09:42.071089 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:42 crc kubenswrapper[4858]: I0127 20:09:42.071170 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:42 crc kubenswrapper[4858]: E0127 20:09:42.071246 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:42 crc kubenswrapper[4858]: I0127 20:09:42.071084 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:42 crc kubenswrapper[4858]: E0127 20:09:42.071349 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:42 crc kubenswrapper[4858]: E0127 20:09:42.071619 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:42 crc kubenswrapper[4858]: I0127 20:09:42.710063 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-855m5_0fea6600-49c2-4130-a506-6046f0f7760d/kube-multus/1.log" Jan 27 20:09:42 crc kubenswrapper[4858]: I0127 20:09:42.710728 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-855m5_0fea6600-49c2-4130-a506-6046f0f7760d/kube-multus/0.log" Jan 27 20:09:42 crc kubenswrapper[4858]: I0127 20:09:42.710798 4858 generic.go:334] "Generic (PLEG): container finished" podID="0fea6600-49c2-4130-a506-6046f0f7760d" containerID="57801dd9a207d6a59bdd79e9a8c06e2d2bce4e40905aa52aaf172b2c9430703f" exitCode=1 Jan 27 20:09:42 crc kubenswrapper[4858]: I0127 20:09:42.710841 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-855m5" event={"ID":"0fea6600-49c2-4130-a506-6046f0f7760d","Type":"ContainerDied","Data":"57801dd9a207d6a59bdd79e9a8c06e2d2bce4e40905aa52aaf172b2c9430703f"} Jan 27 20:09:42 crc kubenswrapper[4858]: I0127 20:09:42.710893 4858 scope.go:117] "RemoveContainer" containerID="e003c4dd00b95d4bc0196215c58c314c11cdcfb76e8de3f16f9f9c99fb0f68ea" Jan 27 20:09:42 crc kubenswrapper[4858]: I0127 20:09:42.711877 4858 scope.go:117] "RemoveContainer" containerID="57801dd9a207d6a59bdd79e9a8c06e2d2bce4e40905aa52aaf172b2c9430703f" Jan 27 20:09:42 crc kubenswrapper[4858]: E0127 20:09:42.712273 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-855m5_openshift-multus(0fea6600-49c2-4130-a506-6046f0f7760d)\"" pod="openshift-multus/multus-855m5" podUID="0fea6600-49c2-4130-a506-6046f0f7760d" Jan 27 20:09:43 crc kubenswrapper[4858]: I0127 20:09:43.070066 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:43 crc kubenswrapper[4858]: E0127 20:09:43.070313 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:43 crc kubenswrapper[4858]: I0127 20:09:43.716818 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-855m5_0fea6600-49c2-4130-a506-6046f0f7760d/kube-multus/1.log" Jan 27 20:09:44 crc kubenswrapper[4858]: I0127 20:09:44.070180 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:44 crc kubenswrapper[4858]: I0127 20:09:44.070189 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:44 crc kubenswrapper[4858]: E0127 20:09:44.070410 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:44 crc kubenswrapper[4858]: I0127 20:09:44.070526 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:44 crc kubenswrapper[4858]: E0127 20:09:44.070710 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:44 crc kubenswrapper[4858]: E0127 20:09:44.070820 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:45 crc kubenswrapper[4858]: I0127 20:09:45.070240 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:45 crc kubenswrapper[4858]: E0127 20:09:45.070389 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:46 crc kubenswrapper[4858]: I0127 20:09:46.070494 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:46 crc kubenswrapper[4858]: I0127 20:09:46.070510 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:46 crc kubenswrapper[4858]: I0127 20:09:46.070652 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:46 crc kubenswrapper[4858]: E0127 20:09:46.071491 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:46 crc kubenswrapper[4858]: E0127 20:09:46.071761 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:46 crc kubenswrapper[4858]: E0127 20:09:46.071893 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:46 crc kubenswrapper[4858]: E0127 20:09:46.170377 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 20:09:47 crc kubenswrapper[4858]: I0127 20:09:47.070516 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:47 crc kubenswrapper[4858]: E0127 20:09:47.070656 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:48 crc kubenswrapper[4858]: I0127 20:09:48.070761 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:48 crc kubenswrapper[4858]: I0127 20:09:48.070770 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:48 crc kubenswrapper[4858]: I0127 20:09:48.070791 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:48 crc kubenswrapper[4858]: E0127 20:09:48.071099 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:48 crc kubenswrapper[4858]: E0127 20:09:48.071224 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:48 crc kubenswrapper[4858]: E0127 20:09:48.071253 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:49 crc kubenswrapper[4858]: I0127 20:09:49.070260 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:49 crc kubenswrapper[4858]: E0127 20:09:49.070401 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:50 crc kubenswrapper[4858]: I0127 20:09:50.070356 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:50 crc kubenswrapper[4858]: E0127 20:09:50.070495 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:50 crc kubenswrapper[4858]: I0127 20:09:50.070356 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:50 crc kubenswrapper[4858]: I0127 20:09:50.070526 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:50 crc kubenswrapper[4858]: E0127 20:09:50.070663 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:50 crc kubenswrapper[4858]: E0127 20:09:50.070763 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:51 crc kubenswrapper[4858]: I0127 20:09:51.070479 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:51 crc kubenswrapper[4858]: E0127 20:09:51.070833 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:51 crc kubenswrapper[4858]: I0127 20:09:51.071091 4858 scope.go:117] "RemoveContainer" containerID="a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879" Jan 27 20:09:51 crc kubenswrapper[4858]: E0127 20:09:51.071235 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rsk7j_openshift-ovn-kubernetes(5cda3ac1-7db7-4215-a301-b757743bff59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" Jan 27 20:09:51 crc kubenswrapper[4858]: E0127 20:09:51.171895 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 20:09:52 crc kubenswrapper[4858]: I0127 20:09:52.070008 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:52 crc kubenswrapper[4858]: I0127 20:09:52.070078 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:52 crc kubenswrapper[4858]: E0127 20:09:52.070271 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:52 crc kubenswrapper[4858]: I0127 20:09:52.070078 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:52 crc kubenswrapper[4858]: E0127 20:09:52.070479 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:52 crc kubenswrapper[4858]: E0127 20:09:52.070577 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:53 crc kubenswrapper[4858]: I0127 20:09:53.070443 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:53 crc kubenswrapper[4858]: E0127 20:09:53.070793 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:54 crc kubenswrapper[4858]: I0127 20:09:54.070421 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:54 crc kubenswrapper[4858]: I0127 20:09:54.070620 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:54 crc kubenswrapper[4858]: E0127 20:09:54.070759 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:54 crc kubenswrapper[4858]: I0127 20:09:54.070842 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:54 crc kubenswrapper[4858]: E0127 20:09:54.071041 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:54 crc kubenswrapper[4858]: E0127 20:09:54.071191 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:55 crc kubenswrapper[4858]: I0127 20:09:55.070604 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:55 crc kubenswrapper[4858]: E0127 20:09:55.070901 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:56 crc kubenswrapper[4858]: I0127 20:09:56.070880 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:56 crc kubenswrapper[4858]: E0127 20:09:56.073120 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:56 crc kubenswrapper[4858]: I0127 20:09:56.073208 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:56 crc kubenswrapper[4858]: E0127 20:09:56.073395 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:56 crc kubenswrapper[4858]: I0127 20:09:56.073722 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:56 crc kubenswrapper[4858]: E0127 20:09:56.073864 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:56 crc kubenswrapper[4858]: E0127 20:09:56.172514 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 20:09:57 crc kubenswrapper[4858]: I0127 20:09:57.070015 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:57 crc kubenswrapper[4858]: E0127 20:09:57.070157 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:09:57 crc kubenswrapper[4858]: I0127 20:09:57.070282 4858 scope.go:117] "RemoveContainer" containerID="57801dd9a207d6a59bdd79e9a8c06e2d2bce4e40905aa52aaf172b2c9430703f" Jan 27 20:09:57 crc kubenswrapper[4858]: I0127 20:09:57.767472 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-855m5_0fea6600-49c2-4130-a506-6046f0f7760d/kube-multus/1.log" Jan 27 20:09:57 crc kubenswrapper[4858]: I0127 20:09:57.768022 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-855m5" event={"ID":"0fea6600-49c2-4130-a506-6046f0f7760d","Type":"ContainerStarted","Data":"7b84079c817a81c05a19043435704e8a5fda3cbe2f61372f38f3fe837f08fdf2"} Jan 27 20:09:58 crc kubenswrapper[4858]: I0127 20:09:58.070192 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:09:58 crc kubenswrapper[4858]: I0127 20:09:58.070256 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:09:58 crc kubenswrapper[4858]: I0127 20:09:58.070221 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:09:58 crc kubenswrapper[4858]: E0127 20:09:58.070393 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:09:58 crc kubenswrapper[4858]: E0127 20:09:58.070614 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:09:58 crc kubenswrapper[4858]: E0127 20:09:58.070687 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:09:59 crc kubenswrapper[4858]: I0127 20:09:59.070875 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:09:59 crc kubenswrapper[4858]: E0127 20:09:59.071022 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:10:00 crc kubenswrapper[4858]: I0127 20:10:00.070949 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:10:00 crc kubenswrapper[4858]: I0127 20:10:00.070949 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:10:00 crc kubenswrapper[4858]: E0127 20:10:00.071169 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:10:00 crc kubenswrapper[4858]: E0127 20:10:00.071227 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:10:00 crc kubenswrapper[4858]: I0127 20:10:00.070979 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:10:00 crc kubenswrapper[4858]: E0127 20:10:00.071316 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:10:01 crc kubenswrapper[4858]: I0127 20:10:01.070664 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:10:01 crc kubenswrapper[4858]: E0127 20:10:01.070914 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:10:01 crc kubenswrapper[4858]: E0127 20:10:01.175055 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 20:10:02 crc kubenswrapper[4858]: I0127 20:10:02.070196 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:10:02 crc kubenswrapper[4858]: I0127 20:10:02.070336 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:10:02 crc kubenswrapper[4858]: E0127 20:10:02.070413 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:10:02 crc kubenswrapper[4858]: E0127 20:10:02.070496 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:10:02 crc kubenswrapper[4858]: I0127 20:10:02.070196 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:10:02 crc kubenswrapper[4858]: E0127 20:10:02.070719 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:10:03 crc kubenswrapper[4858]: I0127 20:10:03.069993 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:10:03 crc kubenswrapper[4858]: E0127 20:10:03.070177 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:10:04 crc kubenswrapper[4858]: I0127 20:10:04.071007 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:10:04 crc kubenswrapper[4858]: I0127 20:10:04.071013 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:10:04 crc kubenswrapper[4858]: I0127 20:10:04.071160 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:10:04 crc kubenswrapper[4858]: E0127 20:10:04.071271 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:10:04 crc kubenswrapper[4858]: E0127 20:10:04.071157 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:10:04 crc kubenswrapper[4858]: E0127 20:10:04.071427 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:10:05 crc kubenswrapper[4858]: I0127 20:10:05.070933 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:10:05 crc kubenswrapper[4858]: E0127 20:10:05.071058 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:10:05 crc kubenswrapper[4858]: I0127 20:10:05.941435 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:05 crc kubenswrapper[4858]: E0127 20:10:05.941765 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:12:07.941717467 +0000 UTC m=+272.649533173 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:05 crc kubenswrapper[4858]: I0127 20:10:05.941860 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:10:05 crc kubenswrapper[4858]: I0127 20:10:05.941961 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:10:05 crc kubenswrapper[4858]: E0127 20:10:05.942090 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:10:05 crc kubenswrapper[4858]: E0127 20:10:05.942121 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 20:10:05 crc kubenswrapper[4858]: E0127 20:10:05.942206 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:12:07.94217734 +0000 UTC m=+272.649993066 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 20:10:05 crc kubenswrapper[4858]: E0127 20:10:05.942244 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:12:07.942231742 +0000 UTC m=+272.650047468 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 20:10:06 crc kubenswrapper[4858]: I0127 20:10:06.043512 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:10:06 crc kubenswrapper[4858]: I0127 20:10:06.043680 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:10:06 crc kubenswrapper[4858]: E0127 20:10:06.043812 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 20:10:06 crc kubenswrapper[4858]: E0127 20:10:06.043862 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 20:10:06 crc kubenswrapper[4858]: E0127 20:10:06.043882 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:10:06 crc kubenswrapper[4858]: E0127 20:10:06.043914 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 20:10:06 crc kubenswrapper[4858]: E0127 20:10:06.043940 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 20:10:06 crc kubenswrapper[4858]: E0127 20:10:06.043957 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:10:06 crc kubenswrapper[4858]: E0127 20:10:06.043960 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 20:12:08.043936343 +0000 UTC m=+272.751752239 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:10:06 crc kubenswrapper[4858]: E0127 20:10:06.044048 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 20:12:08.044025376 +0000 UTC m=+272.751841092 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 20:10:06 crc kubenswrapper[4858]: I0127 20:10:06.070488 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:10:06 crc kubenswrapper[4858]: I0127 20:10:06.070579 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:10:06 crc kubenswrapper[4858]: E0127 20:10:06.071532 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:10:06 crc kubenswrapper[4858]: E0127 20:10:06.071640 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:10:06 crc kubenswrapper[4858]: I0127 20:10:06.071989 4858 scope.go:117] "RemoveContainer" containerID="a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879" Jan 27 20:10:06 crc kubenswrapper[4858]: I0127 20:10:06.075616 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:10:06 crc kubenswrapper[4858]: E0127 20:10:06.075795 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
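
All of the MountVolume.SetUp failures above reduce to one pattern: the ConfigMap or Secret backing a volume is not yet in the kubelet's local informer cache ("object ns/name not registered"), so the mount cannot be constructed. A hypothetical triage helper (mine, not an OpenShift tool) that reads a saved journal on stdin and lists the distinct objects being waited on; the regexp tolerates both plain and journald backslash-escaped quoting:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches `object "ns"/"name" not registered`, with or without the
// backslash-escaping journald applies inside quoted klog messages.
var notRegistered = regexp.MustCompile(`object \\?"([^"\\]+)\\?"/\\?"([^"\\]+)\\?" not registered`)

func main() {
	seen := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		for _, m := range notRegistered.FindAllStringSubmatch(sc.Text(), -1) {
			key := m[1] + "/" + m[2]
			if !seen[key] {
				seen[key] = true
				fmt.Println(key)
			}
		}
	}
}

Fed with something like journalctl -u kubelet --no-pager (assuming that unit name on this host), it would print openshift-network-console/networking-console-plugin, openshift-network-diagnostics/kube-root-ca.crt, and so on — the exact set of objects blocking the pending mounts.
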
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:10:06 crc kubenswrapper[4858]: E0127 20:10:06.175673 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 20:10:06 crc kubenswrapper[4858]: I0127 20:10:06.800796 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovnkube-controller/3.log" Jan 27 20:10:06 crc kubenswrapper[4858]: I0127 20:10:06.803459 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerStarted","Data":"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6"} Jan 27 20:10:06 crc kubenswrapper[4858]: I0127 20:10:06.803871 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:10:06 crc kubenswrapper[4858]: I0127 20:10:06.831396 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podStartSLOduration=122.83137996 podStartE2EDuration="2m2.83137996s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:06.830252257 +0000 UTC m=+151.538067973" watchObservedRunningTime="2026-01-27 20:10:06.83137996 +0000 UTC m=+151.539195666" Jan 27 20:10:07 crc kubenswrapper[4858]: I0127 20:10:07.017492 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-j5hlm"] Jan 27 20:10:07 crc kubenswrapper[4858]: I0127 20:10:07.017736 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:10:07 crc kubenswrapper[4858]: E0127 20:10:07.017881 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:10:07 crc kubenswrapper[4858]: I0127 20:10:07.070245 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:10:07 crc kubenswrapper[4858]: E0127 20:10:07.070442 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:10:08 crc kubenswrapper[4858]: I0127 20:10:08.070702 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:10:08 crc kubenswrapper[4858]: E0127 20:10:08.071074 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:10:08 crc kubenswrapper[4858]: I0127 20:10:08.070845 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:10:08 crc kubenswrapper[4858]: I0127 20:10:08.070834 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:10:08 crc kubenswrapper[4858]: E0127 20:10:08.071155 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:10:08 crc kubenswrapper[4858]: E0127 20:10:08.071434 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:10:09 crc kubenswrapper[4858]: I0127 20:10:09.070798 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:10:09 crc kubenswrapper[4858]: E0127 20:10:09.071016 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:10:10 crc kubenswrapper[4858]: I0127 20:10:10.069973 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:10:10 crc kubenswrapper[4858]: I0127 20:10:10.070134 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:10:10 crc kubenswrapper[4858]: I0127 20:10:10.070289 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:10:10 crc kubenswrapper[4858]: E0127 20:10:10.070380 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j5hlm" podUID="3fa7e9cb-b195-401a-b57c-bdb47f36ffb8" Jan 27 20:10:10 crc kubenswrapper[4858]: E0127 20:10:10.070156 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:10:10 crc kubenswrapper[4858]: E0127 20:10:10.070535 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:10:11 crc kubenswrapper[4858]: I0127 20:10:11.070015 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:10:11 crc kubenswrapper[4858]: E0127 20:10:11.070247 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:10:12 crc kubenswrapper[4858]: I0127 20:10:12.070299 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:10:12 crc kubenswrapper[4858]: I0127 20:10:12.070318 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:10:12 crc kubenswrapper[4858]: I0127 20:10:12.070728 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:10:12 crc kubenswrapper[4858]: I0127 20:10:12.073141 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 20:10:12 crc kubenswrapper[4858]: I0127 20:10:12.073296 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 20:10:12 crc kubenswrapper[4858]: I0127 20:10:12.073644 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 20:10:12 crc kubenswrapper[4858]: I0127 20:10:12.074331 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 20:10:12 crc kubenswrapper[4858]: I0127 20:10:12.074893 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 20:10:12 crc kubenswrapper[4858]: I0127 20:10:12.075151 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 20:10:13 crc kubenswrapper[4858]: I0127 20:10:13.070154 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.690000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.734967 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mqblw"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.736863 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.740290 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f4w6\" (UniqueName: \"kubernetes.io/projected/f20c3023-909c-4904-b65a-f4627bf28119-kube-api-access-8f4w6\") pod \"machine-api-operator-5694c8668f-mqblw\" (UID: \"f20c3023-909c-4904-b65a-f4627bf28119\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.740291 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.740360 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f20c3023-909c-4904-b65a-f4627bf28119-images\") pod \"machine-api-operator-5694c8668f-mqblw\" (UID: \"f20c3023-909c-4904-b65a-f4627bf28119\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.740365 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.740482 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f20c3023-909c-4904-b65a-f4627bf28119-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mqblw\" (UID: \"f20c3023-909c-4904-b65a-f4627bf28119\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.740444 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.740697 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f20c3023-909c-4904-b65a-f4627bf28119-config\") pod \"machine-api-operator-5694c8668f-mqblw\" (UID: \"f20c3023-909c-4904-b65a-f4627bf28119\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.740769 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.741157 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.743664 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-flcdw"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.744195 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-flcdw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.744408 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dvbh6"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.745065 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.747431 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9nxt8"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.748433 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9nxt8" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.750686 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xp2mw"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.751734 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.756421 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.757423 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.758244 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-xpxs8"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.758718 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xpxs8" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.766127 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.767211 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.774019 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.774407 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.775791 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.776736 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.777546 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.777716 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.777864 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.777914 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.778158 4858 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.778357 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.778501 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.778847 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.779395 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.779590 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.779981 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.781421 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.783784 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69lnx"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.784076 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.786912 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.787497 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.787918 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.787968 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.788440 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.788824 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.789218 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.791537 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.791683 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.799961 4858 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.800592 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.801633 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.801770 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.801914 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.802992 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.803284 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.804157 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8p6rb"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.804493 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mzt2r"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.804798 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rsc77"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.805238 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69lnx" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.805358 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.806879 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.812878 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.813400 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.813468 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.813675 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.814001 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.814032 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.814062 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.814427 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.815241 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.815423 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.815541 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.815770 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.815865 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-68tdw"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.815920 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.818946 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.819603 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thkzl"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.820727 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.823418 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.823639 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.823672 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thkzl" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.825269 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5wtjt"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.825766 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.828597 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpvkr"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.829082 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.829151 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpvkr" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.829589 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.829713 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.830008 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.830154 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.830261 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.830354 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.830461 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.830796 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.830911 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.831003 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.831099 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.831204 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.831302 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.831492 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.831692 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 
20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.831719 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.831867 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.831879 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.831908 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.831987 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.832028 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.832114 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.832122 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.832184 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.832250 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.837960 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c6zzp"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.838570 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c6zzp" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.838735 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.839527 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.841020 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.841751 4858 util.go:30] "No sandbox for pod can be found. 
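
Once the node is Ready, the API-sourced "SyncLoop ADD" burst above schedules the control-plane operator pods in one wave, interleaved with reflector cache syncs for their ConfigMaps and Secrets. To see the shape of the wave rather than its individual records, a one-off per-namespace tally (again a hypothetical stdin filter, matching the plain-quoted pods=[...] field as it appears in this dump):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

var addPods = regexp.MustCompile(`"SyncLoop ADD" source="api" pods=\[([^\]]+)\]`)

func main() {
	perNS := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		for _, m := range addPods.FindAllStringSubmatch(sc.Text(), -1) {
			for _, p := range strings.Split(m[1], ",") {
				p = strings.Trim(strings.TrimSpace(p), `"`)
				if ns, _, ok := strings.Cut(p, "/"); ok {
					perNS[ns]++
				}
			}
		}
	}
	for ns, n := range perNS {
		fmt.Printf("%-50s %d\n", ns, n)
	}
}
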
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.842847 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.842878 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b-service-ca-bundle\") pod \"authentication-operator-69f744f599-8p6rb\" (UID: \"057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.842897 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cbe6b979-e2aa-46c6-b4b0-67464630cddf-image-import-ca\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.842911 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-config\") pod \"controller-manager-879f6c89f-dvbh6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.842941 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8dkm\" (UniqueName: \"kubernetes.io/projected/bb41d7df-dacd-41b0-8399-63ddcee318f6-kube-api-access-v8dkm\") pod \"controller-manager-879f6c89f-dvbh6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.842964 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/748214da-1856-4f93-82c5-34403ec46118-etcd-service-ca\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.842982 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/748214da-1856-4f93-82c5-34403ec46118-etcd-client\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843014 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57f98\" (UniqueName: \"kubernetes.io/projected/6d09d17a-ebf7-49c1-ae11-17808115b60c-kube-api-access-57f98\") pod \"machine-config-operator-74547568cd-fqjkv\" (UID: 
\"6d09d17a-ebf7-49c1-ae11-17808115b60c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843043 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a18bc75b-fceb-4545-8a48-3296b1ce8f5c-metrics-tls\") pod \"dns-operator-744455d44c-9nxt8\" (UID: \"a18bc75b-fceb-4545-8a48-3296b1ce8f5c\") " pod="openshift-dns-operator/dns-operator-744455d44c-9nxt8" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843062 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/748214da-1856-4f93-82c5-34403ec46118-serving-cert\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843086 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20d50172-b3f8-431b-962c-14a22d356995-serving-cert\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843117 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f20c3023-909c-4904-b65a-f4627bf28119-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mqblw\" (UID: \"f20c3023-909c-4904-b65a-f4627bf28119\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843140 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843165 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843193 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178df3f9-bdb4-4e93-bb20-6e201cbf11ee-config\") pod \"openshift-apiserver-operator-796bbdcf4f-flcdw\" (UID: \"178df3f9-bdb4-4e93-bb20-6e201cbf11ee\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-flcdw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843214 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxf2z\" (UniqueName: \"kubernetes.io/projected/178df3f9-bdb4-4e93-bb20-6e201cbf11ee-kube-api-access-bxf2z\") pod \"openshift-apiserver-operator-796bbdcf4f-flcdw\" (UID: 
\"178df3f9-bdb4-4e93-bb20-6e201cbf11ee\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-flcdw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843235 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/748214da-1856-4f93-82c5-34403ec46118-etcd-ca\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843249 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8p6rb\" (UID: \"057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843269 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843290 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdkmh\" (UniqueName: \"kubernetes.io/projected/1520e31e-c4b3-4df3-a8cc-db7b0daf491f-kube-api-access-fdkmh\") pod \"downloads-7954f5f757-xpxs8\" (UID: \"1520e31e-c4b3-4df3-a8cc-db7b0daf491f\") " pod="openshift-console/downloads-7954f5f757-xpxs8" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843312 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsb98\" (UniqueName: \"kubernetes.io/projected/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-kube-api-access-bsb98\") pod \"route-controller-manager-6576b87f9c-5rpw8\" (UID: \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843334 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843354 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20d50172-b3f8-431b-962c-14a22d356995-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843371 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1-auth-proxy-config\") pod 
\"machine-approver-56656f9798-6gczg\" (UID: \"a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843397 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843417 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dvbh6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843439 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5wtjt\" (UID: \"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843459 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b-serving-cert\") pod \"authentication-operator-69f744f599-8p6rb\" (UID: \"057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843480 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktk6z\" (UniqueName: \"kubernetes.io/projected/cbe6b979-e2aa-46c6-b4b0-67464630cddf-kube-api-access-ktk6z\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843498 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e38452d4-9405-466f-99cf-0706d9ca1c4f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qg6xk\" (UID: \"e38452d4-9405-466f-99cf-0706d9ca1c4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843517 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20d50172-b3f8-431b-962c-14a22d356995-etcd-client\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843536 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zglst\" (UniqueName: 
\"kubernetes.io/projected/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-kube-api-access-zglst\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843607 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbe6b979-e2aa-46c6-b4b0-67464630cddf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843628 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-serving-cert\") pod \"route-controller-manager-6576b87f9c-5rpw8\" (UID: \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843650 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.843671 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64347cf6-4666-4346-b8b3-58300fa9c0c6-config\") pod \"kube-apiserver-operator-766d6c64bb-tpvkr\" (UID: \"64347cf6-4666-4346-b8b3-58300fa9c0c6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpvkr" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.845699 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20d50172-b3f8-431b-962c-14a22d356995-encryption-config\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.845732 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/784da09b-9380-4388-9121-210d8ee8f5a6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-thkzl\" (UID: \"784da09b-9380-4388-9121-210d8ee8f5a6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thkzl" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.845752 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.845776 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/f20c3023-909c-4904-b65a-f4627bf28119-images\") pod \"machine-api-operator-5694c8668f-mqblw\" (UID: \"f20c3023-909c-4904-b65a-f4627bf28119\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.845792 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbe6b979-e2aa-46c6-b4b0-67464630cddf-serving-cert\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.845819 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cbe6b979-e2aa-46c6-b4b0-67464630cddf-encryption-config\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856353 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/83766314-dad9-48dc-bd66-eea0bea1cefe-stats-auth\") pod \"router-default-5444994796-68tdw\" (UID: \"83766314-dad9-48dc-bd66-eea0bea1cefe\") " pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856387 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e38452d4-9405-466f-99cf-0706d9ca1c4f-serving-cert\") pod \"openshift-config-operator-7777fb866f-qg6xk\" (UID: \"e38452d4-9405-466f-99cf-0706d9ca1c4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856421 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cbe6b979-e2aa-46c6-b4b0-67464630cddf-etcd-serving-ca\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856443 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb41d7df-dacd-41b0-8399-63ddcee318f6-serving-cert\") pod \"controller-manager-879f6c89f-dvbh6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856461 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-client-ca\") pod \"route-controller-manager-6576b87f9c-5rpw8\" (UID: \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856484 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hlkt\" (UniqueName: \"kubernetes.io/projected/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d-kube-api-access-4hlkt\") pod \"marketplace-operator-79b997595-5wtjt\" (UID: 
\"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856509 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6d09d17a-ebf7-49c1-ae11-17808115b60c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fqjkv\" (UID: \"6d09d17a-ebf7-49c1-ae11-17808115b60c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856528 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbe6b979-e2aa-46c6-b4b0-67464630cddf-config\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856564 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbe6b979-e2aa-46c6-b4b0-67464630cddf-audit-dir\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856583 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83766314-dad9-48dc-bd66-eea0bea1cefe-metrics-certs\") pod \"router-default-5444994796-68tdw\" (UID: \"83766314-dad9-48dc-bd66-eea0bea1cefe\") " pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856604 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xnrj\" (UniqueName: \"kubernetes.io/projected/057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b-kube-api-access-9xnrj\") pod \"authentication-operator-69f744f599-8p6rb\" (UID: \"057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856627 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/01f33b82-5877-4c9d-ba44-3c6676c5f41d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c6zzp\" (UID: \"01f33b82-5877-4c9d-ba44-3c6676c5f41d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c6zzp" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856649 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4r2f\" (UniqueName: \"kubernetes.io/projected/784da09b-9380-4388-9121-210d8ee8f5a6-kube-api-access-s4r2f\") pod \"kube-storage-version-migrator-operator-b67b599dd-thkzl\" (UID: \"784da09b-9380-4388-9121-210d8ee8f5a6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thkzl" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856669 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cbe6b979-e2aa-46c6-b4b0-67464630cddf-etcd-client\") pod 
\"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856690 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-audit-dir\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856720 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d09d17a-ebf7-49c1-ae11-17808115b60c-proxy-tls\") pod \"machine-config-operator-74547568cd-fqjkv\" (UID: \"6d09d17a-ebf7-49c1-ae11-17808115b60c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856740 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ksxg\" (UniqueName: \"kubernetes.io/projected/e38452d4-9405-466f-99cf-0706d9ca1c4f-kube-api-access-9ksxg\") pod \"openshift-config-operator-7777fb866f-qg6xk\" (UID: \"e38452d4-9405-466f-99cf-0706d9ca1c4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856758 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20d50172-b3f8-431b-962c-14a22d356995-audit-policies\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856776 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5wtjt\" (UID: \"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856796 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64347cf6-4666-4346-b8b3-58300fa9c0c6-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tpvkr\" (UID: \"64347cf6-4666-4346-b8b3-58300fa9c0c6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpvkr" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856813 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6d09d17a-ebf7-49c1-ae11-17808115b60c-images\") pod \"machine-config-operator-74547568cd-fqjkv\" (UID: \"6d09d17a-ebf7-49c1-ae11-17808115b60c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856833 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856851 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856870 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqhmd\" (UniqueName: \"kubernetes.io/projected/a18bc75b-fceb-4545-8a48-3296b1ce8f5c-kube-api-access-qqhmd\") pod \"dns-operator-744455d44c-9nxt8\" (UID: \"a18bc75b-fceb-4545-8a48-3296b1ce8f5c\") " pod="openshift-dns-operator/dns-operator-744455d44c-9nxt8" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856887 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-client-ca\") pod \"controller-manager-879f6c89f-dvbh6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856907 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64347cf6-4666-4346-b8b3-58300fa9c0c6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-tpvkr\" (UID: \"64347cf6-4666-4346-b8b3-58300fa9c0c6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpvkr" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856926 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn95n\" (UniqueName: \"kubernetes.io/projected/20d50172-b3f8-431b-962c-14a22d356995-kube-api-access-fn95n\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856946 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/784da09b-9380-4388-9121-210d8ee8f5a6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-thkzl\" (UID: \"784da09b-9380-4388-9121-210d8ee8f5a6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thkzl" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856962 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54w67\" (UniqueName: \"kubernetes.io/projected/a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1-kube-api-access-54w67\") pod \"machine-approver-56656f9798-6gczg\" (UID: \"a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856980 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c408a00d-4317-45aa-afc3-eacf9e1be32f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-69lnx\" (UID: \"c408a00d-4317-45aa-afc3-eacf9e1be32f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69lnx" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.856999 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m26sk\" (UniqueName: \"kubernetes.io/projected/c408a00d-4317-45aa-afc3-eacf9e1be32f-kube-api-access-m26sk\") pod \"cluster-samples-operator-665b6dd947-69lnx\" (UID: \"c408a00d-4317-45aa-afc3-eacf9e1be32f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69lnx" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857019 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/178df3f9-bdb4-4e93-bb20-6e201cbf11ee-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-flcdw\" (UID: \"178df3f9-bdb4-4e93-bb20-6e201cbf11ee\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-flcdw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857037 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20d50172-b3f8-431b-962c-14a22d356995-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857054 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/83766314-dad9-48dc-bd66-eea0bea1cefe-default-certificate\") pod \"router-default-5444994796-68tdw\" (UID: \"83766314-dad9-48dc-bd66-eea0bea1cefe\") " pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857071 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83766314-dad9-48dc-bd66-eea0bea1cefe-service-ca-bundle\") pod \"router-default-5444994796-68tdw\" (UID: \"83766314-dad9-48dc-bd66-eea0bea1cefe\") " pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857089 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cbe6b979-e2aa-46c6-b4b0-67464630cddf-audit\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857109 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1-config\") pod \"machine-approver-56656f9798-6gczg\" (UID: \"a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857143 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f20c3023-909c-4904-b65a-f4627bf28119-config\") pod 
\"machine-api-operator-5694c8668f-mqblw\" (UID: \"f20c3023-909c-4904-b65a-f4627bf28119\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857194 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfn5c\" (UniqueName: \"kubernetes.io/projected/748214da-1856-4f93-82c5-34403ec46118-kube-api-access-rfn5c\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857216 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbe6b979-e2aa-46c6-b4b0-67464630cddf-node-pullsecrets\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857243 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20d50172-b3f8-431b-962c-14a22d356995-audit-dir\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857265 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f4w6\" (UniqueName: \"kubernetes.io/projected/f20c3023-909c-4904-b65a-f4627bf28119-kube-api-access-8f4w6\") pod \"machine-api-operator-5694c8668f-mqblw\" (UID: \"f20c3023-909c-4904-b65a-f4627bf28119\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857292 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-527rv\" (UniqueName: \"kubernetes.io/projected/01f33b82-5877-4c9d-ba44-3c6676c5f41d-kube-api-access-527rv\") pod \"control-plane-machine-set-operator-78cbb6b69f-c6zzp\" (UID: \"01f33b82-5877-4c9d-ba44-3c6676c5f41d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c6zzp" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857311 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-audit-policies\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857328 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9mm9\" (UniqueName: \"kubernetes.io/projected/83766314-dad9-48dc-bd66-eea0bea1cefe-kube-api-access-g9mm9\") pod \"router-default-5444994796-68tdw\" (UID: \"83766314-dad9-48dc-bd66-eea0bea1cefe\") " pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857346 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1-machine-approver-tls\") pod \"machine-approver-56656f9798-6gczg\" (UID: 
\"a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857365 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-config\") pod \"route-controller-manager-6576b87f9c-5rpw8\" (UID: \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857383 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b-config\") pod \"authentication-operator-69f744f599-8p6rb\" (UID: \"057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857400 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.857418 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/748214da-1856-4f93-82c5-34403ec46118-config\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.860675 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.861685 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f20c3023-909c-4904-b65a-f4627bf28119-images\") pod \"machine-api-operator-5694c8668f-mqblw\" (UID: \"f20c3023-909c-4904-b65a-f4627bf28119\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.862063 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f20c3023-909c-4904-b65a-f4627bf28119-config\") pod \"machine-api-operator-5694c8668f-mqblw\" (UID: \"f20c3023-909c-4904-b65a-f4627bf28119\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.862259 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.862953 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.863121 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.863304 4858 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.863377 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8tr47"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.865132 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.865796 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.866498 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.873687 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.876292 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.876805 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.877320 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.878845 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rkkqh"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.879808 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.880108 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rkkqh" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.881034 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.891152 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.892639 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f20c3023-909c-4904-b65a-f4627bf28119-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mqblw\" (UID: \"f20c3023-909c-4904-b65a-f4627bf28119\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.893974 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.894146 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.894482 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.894513 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.894529 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.894645 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.894706 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.894792 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.894938 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.894943 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.896137 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.894908 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.904843 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.905497 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.906169 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 
20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.906398 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.906618 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.908073 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nrls7"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.908607 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-nrls7" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.910878 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-w27fl"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.912404 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-w27fl" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.912777 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5cfl5"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.912994 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.913336 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5cfl5" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.914343 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fsx7q"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.914803 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fsx7q" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.914917 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-p72qt"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.915744 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.917337 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.918737 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.919631 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.920152 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l7rb7"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.920238 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.920303 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.921005 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.921330 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qvzfh"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.921832 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l7rb7" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.921917 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qvzfh" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.922093 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.922645 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.923814 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.924449 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.924686 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.925769 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.926371 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tq895"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.926462 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.926656 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.926913 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tq895" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.927893 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dvbh6"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.928849 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-flcdw"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.930257 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xpxs8"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.931387 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mqblw"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.932666 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9nxt8"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.933733 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.935255 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rl4lk"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.936951 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69lnx"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.937073 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rl4lk" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.938345 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.938890 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.939577 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8p6rb"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.940835 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rsc77"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.943197 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.944024 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.944712 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mzt2r"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.945758 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.946703 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 
20:10:17.947639 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nrls7"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.949204 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xp2mw"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.949903 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5wtjt"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.957253 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qvzfh"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.957337 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rkkqh"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.961381 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.963252 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c6zzp"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.966327 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-metrics-tls\") pod \"ingress-operator-5b745b69d9-2szkp\" (UID: \"b4d0e2b3-33dc-497e-94d1-4f728ac62fee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.966462 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-audit-policies\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.966518 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c306264d-7be9-4ec5-a807-a77810848e27-signing-key\") pod \"service-ca-9c57cc56f-nrls7\" (UID: \"c306264d-7be9-4ec5-a807-a77810848e27\") " pod="openshift-service-ca/service-ca-9c57cc56f-nrls7" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.966598 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1-machine-approver-tls\") pod \"machine-approver-56656f9798-6gczg\" (UID: \"a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.966639 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.966676 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/748214da-1856-4f93-82c5-34403ec46118-config\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.966710 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/add9b00f-ce10-44d3-ade0-1881523bbefb-trusted-ca\") pod \"console-operator-58897d9998-rkkqh\" (UID: \"add9b00f-ce10-44d3-ade0-1881523bbefb\") " pod="openshift-console-operator/console-operator-58897d9998-rkkqh" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.967099 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.967153 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b-service-ca-bundle\") pod \"authentication-operator-69f744f599-8p6rb\" (UID: \"057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.967189 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-apiservice-cert\") pod \"packageserver-d55dfcdfc-shmm4\" (UID: \"d7b71f62-e0b7-4903-bb5a-7c081c83fd29\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.967235 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/748214da-1856-4f93-82c5-34403ec46118-etcd-client\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.978917 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-config\") pod \"controller-manager-879f6c89f-dvbh6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.979004 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8pfk\" (UniqueName: \"kubernetes.io/projected/add9b00f-ce10-44d3-ade0-1881523bbefb-kube-api-access-m8pfk\") pod \"console-operator-58897d9998-rkkqh\" (UID: \"add9b00f-ce10-44d3-ade0-1881523bbefb\") " pod="openshift-console-operator/console-operator-58897d9998-rkkqh" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.979083 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57f98\" (UniqueName: \"kubernetes.io/projected/6d09d17a-ebf7-49c1-ae11-17808115b60c-kube-api-access-57f98\") pod 
\"machine-config-operator-74547568cd-fqjkv\" (UID: \"6d09d17a-ebf7-49c1-ae11-17808115b60c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.979119 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a18bc75b-fceb-4545-8a48-3296b1ce8f5c-metrics-tls\") pod \"dns-operator-744455d44c-9nxt8\" (UID: \"a18bc75b-fceb-4545-8a48-3296b1ce8f5c\") " pod="openshift-dns-operator/dns-operator-744455d44c-9nxt8" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.979172 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/748214da-1856-4f93-82c5-34403ec46118-serving-cert\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.979227 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20d50172-b3f8-431b-962c-14a22d356995-serving-cert\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.979263 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9f472a9b-da89-4553-b0fa-d6c8a2e59cca-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5vqm9\" (UID: \"9f472a9b-da89-4553-b0fa-d6c8a2e59cca\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.979327 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8p6rb\" (UID: \"057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.979355 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/df828c47-fa99-4d7f-b3e5-46abda50e131-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gp595\" (UID: \"df828c47-fa99-4d7f-b3e5-46abda50e131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.979417 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdkmh\" (UniqueName: \"kubernetes.io/projected/1520e31e-c4b3-4df3-a8cc-db7b0daf491f-kube-api-access-fdkmh\") pod \"downloads-7954f5f757-xpxs8\" (UID: \"1520e31e-c4b3-4df3-a8cc-db7b0daf491f\") " pod="openshift-console/downloads-7954f5f757-xpxs8" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.979505 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df828c47-fa99-4d7f-b3e5-46abda50e131-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gp595\" (UID: \"df828c47-fa99-4d7f-b3e5-46abda50e131\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.979575 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2szkp\" (UID: \"b4d0e2b3-33dc-497e-94d1-4f728ac62fee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.979708 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.979776 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20d50172-b3f8-431b-962c-14a22d356995-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.979820 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1-auth-proxy-config\") pod \"machine-approver-56656f9798-6gczg\" (UID: \"a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.979888 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/add9b00f-ce10-44d3-ade0-1881523bbefb-serving-cert\") pod \"console-operator-58897d9998-rkkqh\" (UID: \"add9b00f-ce10-44d3-ade0-1881523bbefb\") " pod="openshift-console-operator/console-operator-58897d9998-rkkqh" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.979954 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5wtjt\" (UID: \"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.983872 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e38452d4-9405-466f-99cf-0706d9ca1c4f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qg6xk\" (UID: \"e38452d4-9405-466f-99cf-0706d9ca1c4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.983956 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-tmpfs\") pod \"packageserver-d55dfcdfc-shmm4\" (UID: \"d7b71f62-e0b7-4903-bb5a-7c081c83fd29\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" Jan 27 20:10:17 crc 
kubenswrapper[4858]: I0127 20:10:17.983994 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984024 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbe6b979-e2aa-46c6-b4b0-67464630cddf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984059 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-serving-cert\") pod \"route-controller-manager-6576b87f9c-5rpw8\" (UID: \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984087 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64347cf6-4666-4346-b8b3-58300fa9c0c6-config\") pod \"kube-apiserver-operator-766d6c64bb-tpvkr\" (UID: \"64347cf6-4666-4346-b8b3-58300fa9c0c6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpvkr" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984117 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5lhk\" (UniqueName: \"kubernetes.io/projected/324e5805-141d-4281-8f8d-909b796a36e3-kube-api-access-b5lhk\") pod \"dns-default-rl4lk\" (UID: \"324e5805-141d-4281-8f8d-909b796a36e3\") " pod="openshift-dns/dns-default-rl4lk" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984145 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df828c47-fa99-4d7f-b3e5-46abda50e131-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gp595\" (UID: \"df828c47-fa99-4d7f-b3e5-46abda50e131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984179 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/784da09b-9380-4388-9121-210d8ee8f5a6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-thkzl\" (UID: \"784da09b-9380-4388-9121-210d8ee8f5a6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thkzl" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984201 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984250 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbe6b979-e2aa-46c6-b4b0-67464630cddf-serving-cert\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984293 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw2vk\" (UniqueName: \"kubernetes.io/projected/d2fddf7b-44dd-4b24-be5e-385e1792abaf-kube-api-access-tw2vk\") pod \"service-ca-operator-777779d784-fsx7q\" (UID: \"d2fddf7b-44dd-4b24-be5e-385e1792abaf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fsx7q" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984323 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tjjz\" (UniqueName: \"kubernetes.io/projected/df828c47-fa99-4d7f-b3e5-46abda50e131-kube-api-access-8tjjz\") pod \"cluster-image-registry-operator-dc59b4c8b-gp595\" (UID: \"df828c47-fa99-4d7f-b3e5-46abda50e131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984356 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cbe6b979-e2aa-46c6-b4b0-67464630cddf-etcd-serving-ca\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984384 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-client-ca\") pod \"route-controller-manager-6576b87f9c-5rpw8\" (UID: \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984413 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hlkt\" (UniqueName: \"kubernetes.io/projected/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d-kube-api-access-4hlkt\") pod \"marketplace-operator-79b997595-5wtjt\" (UID: \"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984435 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbe6b979-e2aa-46c6-b4b0-67464630cddf-audit-dir\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984464 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/01f33b82-5877-4c9d-ba44-3c6676c5f41d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c6zzp\" (UID: \"01f33b82-5877-4c9d-ba44-3c6676c5f41d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c6zzp" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984494 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xnrj\" (UniqueName: 
\"kubernetes.io/projected/057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b-kube-api-access-9xnrj\") pod \"authentication-operator-69f744f599-8p6rb\" (UID: \"057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984527 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ksxg\" (UniqueName: \"kubernetes.io/projected/e38452d4-9405-466f-99cf-0706d9ca1c4f-kube-api-access-9ksxg\") pod \"openshift-config-operator-7777fb866f-qg6xk\" (UID: \"e38452d4-9405-466f-99cf-0706d9ca1c4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984625 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d09d17a-ebf7-49c1-ae11-17808115b60c-proxy-tls\") pod \"machine-config-operator-74547568cd-fqjkv\" (UID: \"6d09d17a-ebf7-49c1-ae11-17808115b60c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984658 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5wtjt\" (UID: \"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984712 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64347cf6-4666-4346-b8b3-58300fa9c0c6-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tpvkr\" (UID: \"64347cf6-4666-4346-b8b3-58300fa9c0c6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpvkr" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984734 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6d09d17a-ebf7-49c1-ae11-17808115b60c-images\") pod \"machine-config-operator-74547568cd-fqjkv\" (UID: \"6d09d17a-ebf7-49c1-ae11-17808115b60c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984788 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984816 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqhmd\" (UniqueName: \"kubernetes.io/projected/a18bc75b-fceb-4545-8a48-3296b1ce8f5c-kube-api-access-qqhmd\") pod \"dns-operator-744455d44c-9nxt8\" (UID: \"a18bc75b-fceb-4545-8a48-3296b1ce8f5c\") " pod="openshift-dns-operator/dns-operator-744455d44c-9nxt8" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984870 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlmwn\" (UniqueName: 
\"kubernetes.io/projected/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-kube-api-access-vlmwn\") pod \"packageserver-d55dfcdfc-shmm4\" (UID: \"d7b71f62-e0b7-4903-bb5a-7c081c83fd29\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984896 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn95n\" (UniqueName: \"kubernetes.io/projected/20d50172-b3f8-431b-962c-14a22d356995-kube-api-access-fn95n\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984948 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-trusted-ca\") pod \"ingress-operator-5b745b69d9-2szkp\" (UID: \"b4d0e2b3-33dc-497e-94d1-4f728ac62fee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.984979 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/784da09b-9380-4388-9121-210d8ee8f5a6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-thkzl\" (UID: \"784da09b-9380-4388-9121-210d8ee8f5a6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thkzl" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985036 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54w67\" (UniqueName: \"kubernetes.io/projected/a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1-kube-api-access-54w67\") pod \"machine-approver-56656f9798-6gczg\" (UID: \"a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985065 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m26sk\" (UniqueName: \"kubernetes.io/projected/c408a00d-4317-45aa-afc3-eacf9e1be32f-kube-api-access-m26sk\") pod \"cluster-samples-operator-665b6dd947-69lnx\" (UID: \"c408a00d-4317-45aa-afc3-eacf9e1be32f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69lnx" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985130 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/178df3f9-bdb4-4e93-bb20-6e201cbf11ee-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-flcdw\" (UID: \"178df3f9-bdb4-4e93-bb20-6e201cbf11ee\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-flcdw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985202 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20d50172-b3f8-431b-962c-14a22d356995-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985236 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/83766314-dad9-48dc-bd66-eea0bea1cefe-default-certificate\") 
pod \"router-default-5444994796-68tdw\" (UID: \"83766314-dad9-48dc-bd66-eea0bea1cefe\") " pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985299 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add9b00f-ce10-44d3-ade0-1881523bbefb-config\") pod \"console-operator-58897d9998-rkkqh\" (UID: \"add9b00f-ce10-44d3-ade0-1881523bbefb\") " pod="openshift-console-operator/console-operator-58897d9998-rkkqh" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985326 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cbe6b979-e2aa-46c6-b4b0-67464630cddf-audit\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985389 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g44f2\" (UniqueName: \"kubernetes.io/projected/c306264d-7be9-4ec5-a807-a77810848e27-kube-api-access-g44f2\") pod \"service-ca-9c57cc56f-nrls7\" (UID: \"c306264d-7be9-4ec5-a807-a77810848e27\") " pod="openshift-service-ca/service-ca-9c57cc56f-nrls7" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985458 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20d50172-b3f8-431b-962c-14a22d356995-audit-dir\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985490 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fddf7b-44dd-4b24-be5e-385e1792abaf-config\") pod \"service-ca-operator-777779d784-fsx7q\" (UID: \"d2fddf7b-44dd-4b24-be5e-385e1792abaf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fsx7q" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985589 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-527rv\" (UniqueName: \"kubernetes.io/projected/01f33b82-5877-4c9d-ba44-3c6676c5f41d-kube-api-access-527rv\") pod \"control-plane-machine-set-operator-78cbb6b69f-c6zzp\" (UID: \"01f33b82-5877-4c9d-ba44-3c6676c5f41d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c6zzp" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985620 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b-config\") pod \"authentication-operator-69f744f599-8p6rb\" (UID: \"057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985683 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9mm9\" (UniqueName: \"kubernetes.io/projected/83766314-dad9-48dc-bd66-eea0bea1cefe-kube-api-access-g9mm9\") pod \"router-default-5444994796-68tdw\" (UID: \"83766314-dad9-48dc-bd66-eea0bea1cefe\") " pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985711 
4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-config\") pod \"route-controller-manager-6576b87f9c-5rpw8\" (UID: \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985768 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/748214da-1856-4f93-82c5-34403ec46118-etcd-service-ca\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985842 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cbe6b979-e2aa-46c6-b4b0-67464630cddf-image-import-ca\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985875 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8dkm\" (UniqueName: \"kubernetes.io/projected/bb41d7df-dacd-41b0-8399-63ddcee318f6-kube-api-access-v8dkm\") pod \"controller-manager-879f6c89f-dvbh6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.985962 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtv6n\" (UniqueName: \"kubernetes.io/projected/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-kube-api-access-mtv6n\") pod \"ingress-operator-5b745b69d9-2szkp\" (UID: \"b4d0e2b3-33dc-497e-94d1-4f728ac62fee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986010 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178df3f9-bdb4-4e93-bb20-6e201cbf11ee-config\") pod \"openshift-apiserver-operator-796bbdcf4f-flcdw\" (UID: \"178df3f9-bdb4-4e93-bb20-6e201cbf11ee\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-flcdw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986037 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxf2z\" (UniqueName: \"kubernetes.io/projected/178df3f9-bdb4-4e93-bb20-6e201cbf11ee-kube-api-access-bxf2z\") pod \"openshift-apiserver-operator-796bbdcf4f-flcdw\" (UID: \"178df3f9-bdb4-4e93-bb20-6e201cbf11ee\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-flcdw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986093 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986122 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986168 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/748214da-1856-4f93-82c5-34403ec46118-etcd-ca\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986196 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986253 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsb98\" (UniqueName: \"kubernetes.io/projected/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-kube-api-access-bsb98\") pod \"route-controller-manager-6576b87f9c-5rpw8\" (UID: \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986282 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c306264d-7be9-4ec5-a807-a77810848e27-signing-cabundle\") pod \"service-ca-9c57cc56f-nrls7\" (UID: \"c306264d-7be9-4ec5-a807-a77810848e27\") " pod="openshift-service-ca/service-ca-9c57cc56f-nrls7" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986330 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55f73f2a-838f-49aa-81ab-1f5ab6de718a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hn4z\" (UID: \"55f73f2a-838f-49aa-81ab-1f5ab6de718a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986360 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dvbh6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986416 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986468 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b-serving-cert\") pod \"authentication-operator-69f744f599-8p6rb\" (UID: \"057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986502 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skljw\" (UniqueName: \"kubernetes.io/projected/9f472a9b-da89-4553-b0fa-d6c8a2e59cca-kube-api-access-skljw\") pod \"olm-operator-6b444d44fb-5vqm9\" (UID: \"9f472a9b-da89-4553-b0fa-d6c8a2e59cca\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986562 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktk6z\" (UniqueName: \"kubernetes.io/projected/cbe6b979-e2aa-46c6-b4b0-67464630cddf-kube-api-access-ktk6z\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986644 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20d50172-b3f8-431b-962c-14a22d356995-etcd-client\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986679 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zglst\" (UniqueName: \"kubernetes.io/projected/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-kube-api-access-zglst\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986736 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20d50172-b3f8-431b-962c-14a22d356995-encryption-config\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986765 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cbe6b979-e2aa-46c6-b4b0-67464630cddf-encryption-config\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986819 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e38452d4-9405-466f-99cf-0706d9ca1c4f-serving-cert\") pod \"openshift-config-operator-7777fb866f-qg6xk\" (UID: \"e38452d4-9405-466f-99cf-0706d9ca1c4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986875 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/83766314-dad9-48dc-bd66-eea0bea1cefe-stats-auth\") pod \"router-default-5444994796-68tdw\" (UID: \"83766314-dad9-48dc-bd66-eea0bea1cefe\") " pod="openshift-ingress/router-default-5444994796-68tdw" 
Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986911 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f73f2a-838f-49aa-81ab-1f5ab6de718a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hn4z\" (UID: \"55f73f2a-838f-49aa-81ab-1f5ab6de718a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.986977 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb41d7df-dacd-41b0-8399-63ddcee318f6-serving-cert\") pod \"controller-manager-879f6c89f-dvbh6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.987004 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/324e5805-141d-4281-8f8d-909b796a36e3-metrics-tls\") pod \"dns-default-rl4lk\" (UID: \"324e5805-141d-4281-8f8d-909b796a36e3\") " pod="openshift-dns/dns-default-rl4lk" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.987065 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83766314-dad9-48dc-bd66-eea0bea1cefe-metrics-certs\") pod \"router-default-5444994796-68tdw\" (UID: \"83766314-dad9-48dc-bd66-eea0bea1cefe\") " pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.987124 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6d09d17a-ebf7-49c1-ae11-17808115b60c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fqjkv\" (UID: \"6d09d17a-ebf7-49c1-ae11-17808115b60c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.987158 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbe6b979-e2aa-46c6-b4b0-67464630cddf-config\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.987213 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4r2f\" (UniqueName: \"kubernetes.io/projected/784da09b-9380-4388-9121-210d8ee8f5a6-kube-api-access-s4r2f\") pod \"kube-storage-version-migrator-operator-b67b599dd-thkzl\" (UID: \"784da09b-9380-4388-9121-210d8ee8f5a6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thkzl" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.987246 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-audit-dir\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.987301 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/cbe6b979-e2aa-46c6-b4b0-67464630cddf-etcd-client\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.987331 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20d50172-b3f8-431b-962c-14a22d356995-audit-policies\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.987768 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.987798 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-client-ca\") pod \"controller-manager-879f6c89f-dvbh6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.987826 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64347cf6-4666-4346-b8b3-58300fa9c0c6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-tpvkr\" (UID: \"64347cf6-4666-4346-b8b3-58300fa9c0c6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpvkr" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.987855 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55f73f2a-838f-49aa-81ab-1f5ab6de718a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hn4z\" (UID: \"55f73f2a-838f-49aa-81ab-1f5ab6de718a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.987885 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c408a00d-4317-45aa-afc3-eacf9e1be32f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-69lnx\" (UID: \"c408a00d-4317-45aa-afc3-eacf9e1be32f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69lnx" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.987909 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9f472a9b-da89-4553-b0fa-d6c8a2e59cca-srv-cert\") pod \"olm-operator-6b444d44fb-5vqm9\" (UID: \"9f472a9b-da89-4553-b0fa-d6c8a2e59cca\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.987940 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83766314-dad9-48dc-bd66-eea0bea1cefe-service-ca-bundle\") pod \"router-default-5444994796-68tdw\" (UID: 
\"83766314-dad9-48dc-bd66-eea0bea1cefe\") " pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.987972 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1-config\") pod \"machine-approver-56656f9798-6gczg\" (UID: \"a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.988001 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-webhook-cert\") pod \"packageserver-d55dfcdfc-shmm4\" (UID: \"d7b71f62-e0b7-4903-bb5a-7c081c83fd29\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.988029 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfn5c\" (UniqueName: \"kubernetes.io/projected/748214da-1856-4f93-82c5-34403ec46118-kube-api-access-rfn5c\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.988059 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2fddf7b-44dd-4b24-be5e-385e1792abaf-serving-cert\") pod \"service-ca-operator-777779d784-fsx7q\" (UID: \"d2fddf7b-44dd-4b24-be5e-385e1792abaf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fsx7q" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.988089 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbe6b979-e2aa-46c6-b4b0-67464630cddf-node-pullsecrets\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.988118 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/324e5805-141d-4281-8f8d-909b796a36e3-config-volume\") pod \"dns-default-rl4lk\" (UID: \"324e5805-141d-4281-8f8d-909b796a36e3\") " pod="openshift-dns/dns-default-rl4lk" Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.992199 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thkzl"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.992264 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpvkr"] Jan 27 20:10:17 crc kubenswrapper[4858]: I0127 20:10:17.992292 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tq895"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.007155 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-audit-policies\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: 
\"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.007988 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/748214da-1856-4f93-82c5-34403ec46118-config\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.008931 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/748214da-1856-4f93-82c5-34403ec46118-etcd-client\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.009044 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.009865 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbe6b979-e2aa-46c6-b4b0-67464630cddf-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.010273 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-config\") pod \"controller-manager-879f6c89f-dvbh6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.010363 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b-service-ca-bundle\") pod \"authentication-operator-69f744f599-8p6rb\" (UID: \"057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.010418 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-p72qt"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.011438 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5cfl5"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.014162 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dvbh6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.014220 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l7rb7"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.021280 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e38452d4-9405-466f-99cf-0706d9ca1c4f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qg6xk\" (UID: \"e38452d4-9405-466f-99cf-0706d9ca1c4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.021622 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/784da09b-9380-4388-9121-210d8ee8f5a6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-thkzl\" (UID: \"784da09b-9380-4388-9121-210d8ee8f5a6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thkzl" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.022100 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.022884 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/748214da-1856-4f93-82c5-34403ec46118-etcd-service-ca\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.023716 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/178df3f9-bdb4-4e93-bb20-6e201cbf11ee-config\") pod \"openshift-apiserver-operator-796bbdcf4f-flcdw\" (UID: \"178df3f9-bdb4-4e93-bb20-6e201cbf11ee\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-flcdw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.024417 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1-auth-proxy-config\") pod \"machine-approver-56656f9798-6gczg\" (UID: \"a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.026002 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20d50172-b3f8-431b-962c-14a22d356995-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.026100 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.026120 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20d50172-b3f8-431b-962c-14a22d356995-etcd-client\") pod \"apiserver-7bbb656c7d-242rs\" (UID: 
\"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.028312 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.033035 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.033765 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1-machine-approver-tls\") pod \"machine-approver-56656f9798-6gczg\" (UID: \"a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.034335 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.035638 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6d09d17a-ebf7-49c1-ae11-17808115b60c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fqjkv\" (UID: \"6d09d17a-ebf7-49c1-ae11-17808115b60c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.036841 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-audit-dir\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.037120 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/748214da-1856-4f93-82c5-34403ec46118-etcd-ca\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.037093 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbe6b979-e2aa-46c6-b4b0-67464630cddf-config\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.038147 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cbe6b979-e2aa-46c6-b4b0-67464630cddf-image-import-ca\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 
20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.038404 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbe6b979-e2aa-46c6-b4b0-67464630cddf-audit-dir\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.039052 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b-serving-cert\") pod \"authentication-operator-69f744f599-8p6rb\" (UID: \"057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.039074 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-serving-cert\") pod \"route-controller-manager-6576b87f9c-5rpw8\" (UID: \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.039073 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a18bc75b-fceb-4545-8a48-3296b1ce8f5c-metrics-tls\") pod \"dns-operator-744455d44c-9nxt8\" (UID: \"a18bc75b-fceb-4545-8a48-3296b1ce8f5c\") " pod="openshift-dns-operator/dns-operator-744455d44c-9nxt8" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.039179 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20d50172-b3f8-431b-962c-14a22d356995-encryption-config\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.040029 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-client-ca\") pod \"route-controller-manager-6576b87f9c-5rpw8\" (UID: \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.040616 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb41d7df-dacd-41b0-8399-63ddcee318f6-serving-cert\") pod \"controller-manager-879f6c89f-dvbh6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.040924 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5wtjt\" (UID: \"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.041315 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.048152 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cbe6b979-e2aa-46c6-b4b0-67464630cddf-audit\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.048240 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20d50172-b3f8-431b-962c-14a22d356995-audit-dir\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.048949 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b-config\") pod \"authentication-operator-69f744f599-8p6rb\" (UID: \"057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.048958 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cbe6b979-e2aa-46c6-b4b0-67464630cddf-etcd-client\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.049250 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20d50172-b3f8-431b-962c-14a22d356995-audit-policies\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.049783 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8p6rb\" (UID: \"057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.049919 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.050097 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/784da09b-9380-4388-9121-210d8ee8f5a6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-thkzl\" (UID: \"784da09b-9380-4388-9121-210d8ee8f5a6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thkzl" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.050478 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.050624 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.050861 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5wtjt\" (UID: \"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.051033 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1-config\") pod \"machine-approver-56656f9798-6gczg\" (UID: \"a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.051239 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-config\") pod \"route-controller-manager-6576b87f9c-5rpw8\" (UID: \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.051306 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cbe6b979-e2aa-46c6-b4b0-67464630cddf-etcd-serving-ca\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.051412 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbe6b979-e2aa-46c6-b4b0-67464630cddf-node-pullsecrets\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.051759 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-client-ca\") pod \"controller-manager-879f6c89f-dvbh6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.051801 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.051855 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/178df3f9-bdb4-4e93-bb20-6e201cbf11ee-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-flcdw\" (UID: \"178df3f9-bdb4-4e93-bb20-6e201cbf11ee\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-flcdw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.051932 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20d50172-b3f8-431b-962c-14a22d356995-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.052388 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.052418 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e38452d4-9405-466f-99cf-0706d9ca1c4f-serving-cert\") pod \"openshift-config-operator-7777fb866f-qg6xk\" (UID: \"e38452d4-9405-466f-99cf-0706d9ca1c4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.052500 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83766314-dad9-48dc-bd66-eea0bea1cefe-metrics-certs\") pod \"router-default-5444994796-68tdw\" (UID: \"83766314-dad9-48dc-bd66-eea0bea1cefe\") " pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.054083 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cbe6b979-e2aa-46c6-b4b0-67464630cddf-encryption-config\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.054232 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbe6b979-e2aa-46c6-b4b0-67464630cddf-serving-cert\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.054534 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.054819 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.054850 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.055126 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20d50172-b3f8-431b-962c-14a22d356995-serving-cert\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.055380 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-w27fl"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.056785 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.057258 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c408a00d-4317-45aa-afc3-eacf9e1be32f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-69lnx\" (UID: \"c408a00d-4317-45aa-afc3-eacf9e1be32f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69lnx" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.057316 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83766314-dad9-48dc-bd66-eea0bea1cefe-service-ca-bundle\") pod \"router-default-5444994796-68tdw\" (UID: \"83766314-dad9-48dc-bd66-eea0bea1cefe\") " pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.057858 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/748214da-1856-4f93-82c5-34403ec46118-serving-cert\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.059305 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/83766314-dad9-48dc-bd66-eea0bea1cefe-stats-auth\") pod \"router-default-5444994796-68tdw\" (UID: \"83766314-dad9-48dc-bd66-eea0bea1cefe\") " pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.059848 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fsx7q"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.060364 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.061489 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-g86cr"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.062497 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-g86cr" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.062883 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rl4lk"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.063879 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5mx8r"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.065289 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8tr47"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.065397 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.066201 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.071074 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/83766314-dad9-48dc-bd66-eea0bea1cefe-default-certificate\") pod \"router-default-5444994796-68tdw\" (UID: \"83766314-dad9-48dc-bd66-eea0bea1cefe\") " pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.078756 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.078817 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.078831 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.078844 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.078855 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5mx8r"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.078867 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-fzjn9"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.079540 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.079585 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fzjn9"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.079674 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fzjn9" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.089343 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-webhook-cert\") pod \"packageserver-d55dfcdfc-shmm4\" (UID: \"d7b71f62-e0b7-4903-bb5a-7c081c83fd29\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.089391 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2fddf7b-44dd-4b24-be5e-385e1792abaf-serving-cert\") pod \"service-ca-operator-777779d784-fsx7q\" (UID: \"d2fddf7b-44dd-4b24-be5e-385e1792abaf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fsx7q" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.089422 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/324e5805-141d-4281-8f8d-909b796a36e3-config-volume\") pod \"dns-default-rl4lk\" (UID: \"324e5805-141d-4281-8f8d-909b796a36e3\") " pod="openshift-dns/dns-default-rl4lk" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.089457 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-metrics-tls\") pod \"ingress-operator-5b745b69d9-2szkp\" (UID: \"b4d0e2b3-33dc-497e-94d1-4f728ac62fee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.089711 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c306264d-7be9-4ec5-a807-a77810848e27-signing-key\") pod \"service-ca-9c57cc56f-nrls7\" (UID: \"c306264d-7be9-4ec5-a807-a77810848e27\") " pod="openshift-service-ca/service-ca-9c57cc56f-nrls7" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.089785 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/add9b00f-ce10-44d3-ade0-1881523bbefb-trusted-ca\") pod \"console-operator-58897d9998-rkkqh\" (UID: \"add9b00f-ce10-44d3-ade0-1881523bbefb\") " pod="openshift-console-operator/console-operator-58897d9998-rkkqh" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.089816 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-apiservice-cert\") pod \"packageserver-d55dfcdfc-shmm4\" (UID: \"d7b71f62-e0b7-4903-bb5a-7c081c83fd29\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.089882 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8pfk\" (UniqueName: \"kubernetes.io/projected/add9b00f-ce10-44d3-ade0-1881523bbefb-kube-api-access-m8pfk\") pod \"console-operator-58897d9998-rkkqh\" (UID: \"add9b00f-ce10-44d3-ade0-1881523bbefb\") " pod="openshift-console-operator/console-operator-58897d9998-rkkqh" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.089942 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/9f472a9b-da89-4553-b0fa-d6c8a2e59cca-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5vqm9\" (UID: \"9f472a9b-da89-4553-b0fa-d6c8a2e59cca\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.089975 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/df828c47-fa99-4d7f-b3e5-46abda50e131-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gp595\" (UID: \"df828c47-fa99-4d7f-b3e5-46abda50e131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090035 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df828c47-fa99-4d7f-b3e5-46abda50e131-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gp595\" (UID: \"df828c47-fa99-4d7f-b3e5-46abda50e131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090063 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2szkp\" (UID: \"b4d0e2b3-33dc-497e-94d1-4f728ac62fee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090112 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/add9b00f-ce10-44d3-ade0-1881523bbefb-serving-cert\") pod \"console-operator-58897d9998-rkkqh\" (UID: \"add9b00f-ce10-44d3-ade0-1881523bbefb\") " pod="openshift-console-operator/console-operator-58897d9998-rkkqh" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090134 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-tmpfs\") pod \"packageserver-d55dfcdfc-shmm4\" (UID: \"d7b71f62-e0b7-4903-bb5a-7c081c83fd29\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090162 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5lhk\" (UniqueName: \"kubernetes.io/projected/324e5805-141d-4281-8f8d-909b796a36e3-kube-api-access-b5lhk\") pod \"dns-default-rl4lk\" (UID: \"324e5805-141d-4281-8f8d-909b796a36e3\") " pod="openshift-dns/dns-default-rl4lk" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090180 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df828c47-fa99-4d7f-b3e5-46abda50e131-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gp595\" (UID: \"df828c47-fa99-4d7f-b3e5-46abda50e131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090212 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw2vk\" (UniqueName: \"kubernetes.io/projected/d2fddf7b-44dd-4b24-be5e-385e1792abaf-kube-api-access-tw2vk\") pod \"service-ca-operator-777779d784-fsx7q\" (UID: 
\"d2fddf7b-44dd-4b24-be5e-385e1792abaf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fsx7q" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090232 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tjjz\" (UniqueName: \"kubernetes.io/projected/df828c47-fa99-4d7f-b3e5-46abda50e131-kube-api-access-8tjjz\") pod \"cluster-image-registry-operator-dc59b4c8b-gp595\" (UID: \"df828c47-fa99-4d7f-b3e5-46abda50e131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090291 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlmwn\" (UniqueName: \"kubernetes.io/projected/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-kube-api-access-vlmwn\") pod \"packageserver-d55dfcdfc-shmm4\" (UID: \"d7b71f62-e0b7-4903-bb5a-7c081c83fd29\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090350 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-trusted-ca\") pod \"ingress-operator-5b745b69d9-2szkp\" (UID: \"b4d0e2b3-33dc-497e-94d1-4f728ac62fee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090385 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add9b00f-ce10-44d3-ade0-1881523bbefb-config\") pod \"console-operator-58897d9998-rkkqh\" (UID: \"add9b00f-ce10-44d3-ade0-1881523bbefb\") " pod="openshift-console-operator/console-operator-58897d9998-rkkqh" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090406 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g44f2\" (UniqueName: \"kubernetes.io/projected/c306264d-7be9-4ec5-a807-a77810848e27-kube-api-access-g44f2\") pod \"service-ca-9c57cc56f-nrls7\" (UID: \"c306264d-7be9-4ec5-a807-a77810848e27\") " pod="openshift-service-ca/service-ca-9c57cc56f-nrls7" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090425 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fddf7b-44dd-4b24-be5e-385e1792abaf-config\") pod \"service-ca-operator-777779d784-fsx7q\" (UID: \"d2fddf7b-44dd-4b24-be5e-385e1792abaf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fsx7q" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090484 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtv6n\" (UniqueName: \"kubernetes.io/projected/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-kube-api-access-mtv6n\") pod \"ingress-operator-5b745b69d9-2szkp\" (UID: \"b4d0e2b3-33dc-497e-94d1-4f728ac62fee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090517 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c306264d-7be9-4ec5-a807-a77810848e27-signing-cabundle\") pod \"service-ca-9c57cc56f-nrls7\" (UID: \"c306264d-7be9-4ec5-a807-a77810848e27\") " pod="openshift-service-ca/service-ca-9c57cc56f-nrls7" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090560 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55f73f2a-838f-49aa-81ab-1f5ab6de718a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hn4z\" (UID: \"55f73f2a-838f-49aa-81ab-1f5ab6de718a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090583 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skljw\" (UniqueName: \"kubernetes.io/projected/9f472a9b-da89-4553-b0fa-d6c8a2e59cca-kube-api-access-skljw\") pod \"olm-operator-6b444d44fb-5vqm9\" (UID: \"9f472a9b-da89-4553-b0fa-d6c8a2e59cca\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090619 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f73f2a-838f-49aa-81ab-1f5ab6de718a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hn4z\" (UID: \"55f73f2a-838f-49aa-81ab-1f5ab6de718a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090642 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/324e5805-141d-4281-8f8d-909b796a36e3-metrics-tls\") pod \"dns-default-rl4lk\" (UID: \"324e5805-141d-4281-8f8d-909b796a36e3\") " pod="openshift-dns/dns-default-rl4lk" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090676 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55f73f2a-838f-49aa-81ab-1f5ab6de718a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hn4z\" (UID: \"55f73f2a-838f-49aa-81ab-1f5ab6de718a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090695 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9f472a9b-da89-4553-b0fa-d6c8a2e59cca-srv-cert\") pod \"olm-operator-6b444d44fb-5vqm9\" (UID: \"9f472a9b-da89-4553-b0fa-d6c8a2e59cca\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.090839 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-tmpfs\") pod \"packageserver-d55dfcdfc-shmm4\" (UID: \"d7b71f62-e0b7-4903-bb5a-7c081c83fd29\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.092842 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64347cf6-4666-4346-b8b3-58300fa9c0c6-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-tpvkr\" (UID: \"64347cf6-4666-4346-b8b3-58300fa9c0c6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpvkr" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.099963 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 
20:10:18.108066 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64347cf6-4666-4346-b8b3-58300fa9c0c6-config\") pod \"kube-apiserver-operator-766d6c64bb-tpvkr\" (UID: \"64347cf6-4666-4346-b8b3-58300fa9c0c6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpvkr" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.119521 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.140172 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.152636 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/01f33b82-5877-4c9d-ba44-3c6676c5f41d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c6zzp\" (UID: \"01f33b82-5877-4c9d-ba44-3c6676c5f41d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c6zzp" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.163789 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.172016 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6d09d17a-ebf7-49c1-ae11-17808115b60c-images\") pod \"machine-config-operator-74547568cd-fqjkv\" (UID: \"6d09d17a-ebf7-49c1-ae11-17808115b60c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.179850 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.199254 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.211418 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d09d17a-ebf7-49c1-ae11-17808115b60c-proxy-tls\") pod \"machine-config-operator-74547568cd-fqjkv\" (UID: \"6d09d17a-ebf7-49c1-ae11-17808115b60c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.219185 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.239348 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.259833 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.263237 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9f472a9b-da89-4553-b0fa-d6c8a2e59cca-profile-collector-cert\") pod 
\"olm-operator-6b444d44fb-5vqm9\" (UID: \"9f472a9b-da89-4553-b0fa-d6c8a2e59cca\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.279786 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.299762 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.334668 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f4w6\" (UniqueName: \"kubernetes.io/projected/f20c3023-909c-4904-b65a-f4627bf28119-kube-api-access-8f4w6\") pod \"machine-api-operator-5694c8668f-mqblw\" (UID: \"f20c3023-909c-4904-b65a-f4627bf28119\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.343705 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.351652 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df828c47-fa99-4d7f-b3e5-46abda50e131-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-gp595\" (UID: \"df828c47-fa99-4d7f-b3e5-46abda50e131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.359381 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.361323 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.379832 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.399622 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.447701 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.459901 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.464824 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/add9b00f-ce10-44d3-ade0-1881523bbefb-serving-cert\") pod \"console-operator-58897d9998-rkkqh\" (UID: \"add9b00f-ce10-44d3-ade0-1881523bbefb\") " pod="openshift-console-operator/console-operator-58897d9998-rkkqh" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.479446 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.500392 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.519209 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.533767 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mqblw"] Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.539677 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 20:10:18 crc kubenswrapper[4858]: W0127 20:10:18.539856 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf20c3023_909c_4904_b65a_f4627bf28119.slice/crio-83f7296fb7de14825d80bd79a63c390ef6527e6bd0dfe90b8f68bc82f78f7418 WatchSource:0}: Error finding container 83f7296fb7de14825d80bd79a63c390ef6527e6bd0dfe90b8f68bc82f78f7418: Status 404 returned error can't find the container with id 83f7296fb7de14825d80bd79a63c390ef6527e6bd0dfe90b8f68bc82f78f7418 Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.541586 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/add9b00f-ce10-44d3-ade0-1881523bbefb-config\") pod \"console-operator-58897d9998-rkkqh\" (UID: \"add9b00f-ce10-44d3-ade0-1881523bbefb\") " pod="openshift-console-operator/console-operator-58897d9998-rkkqh" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.564181 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.571316 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/add9b00f-ce10-44d3-ade0-1881523bbefb-trusted-ca\") pod 
\"console-operator-58897d9998-rkkqh\" (UID: \"add9b00f-ce10-44d3-ade0-1881523bbefb\") " pod="openshift-console-operator/console-operator-58897d9998-rkkqh" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.578962 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.599283 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.619229 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.639944 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.655616 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c306264d-7be9-4ec5-a807-a77810848e27-signing-key\") pod \"service-ca-9c57cc56f-nrls7\" (UID: \"c306264d-7be9-4ec5-a807-a77810848e27\") " pod="openshift-service-ca/service-ca-9c57cc56f-nrls7" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.659532 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.662033 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c306264d-7be9-4ec5-a807-a77810848e27-signing-cabundle\") pod \"service-ca-9c57cc56f-nrls7\" (UID: \"c306264d-7be9-4ec5-a807-a77810848e27\") " pod="openshift-service-ca/service-ca-9c57cc56f-nrls7" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.680324 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.699881 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.719736 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.740249 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.760356 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.779605 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.799752 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.820073 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.833290 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d2fddf7b-44dd-4b24-be5e-385e1792abaf-serving-cert\") pod \"service-ca-operator-777779d784-fsx7q\" (UID: \"d2fddf7b-44dd-4b24-be5e-385e1792abaf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fsx7q" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.839513 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.842175 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fddf7b-44dd-4b24-be5e-385e1792abaf-config\") pod \"service-ca-operator-777779d784-fsx7q\" (UID: \"d2fddf7b-44dd-4b24-be5e-385e1792abaf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fsx7q" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.859096 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.871382 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" event={"ID":"f20c3023-909c-4904-b65a-f4627bf28119","Type":"ContainerStarted","Data":"7ea72121b62f36abd55d14421b9733b0aefa4a2517b7c5dfa2a2341dd988e11a"} Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.871453 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" event={"ID":"f20c3023-909c-4904-b65a-f4627bf28119","Type":"ContainerStarted","Data":"b2e20f1cbea2548324912c8a22d85342c309bef19d22c663c20c7499b76d00e8"} Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.871469 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" event={"ID":"f20c3023-909c-4904-b65a-f4627bf28119","Type":"ContainerStarted","Data":"83f7296fb7de14825d80bd79a63c390ef6527e6bd0dfe90b8f68bc82f78f7418"} Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.879739 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.900011 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.917531 4858 request.go:700] Waited for 1.001564638s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-oauth-config&limit=500&resourceVersion=0 Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.919660 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.939588 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.960115 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 20:10:18 crc kubenswrapper[4858]: I0127 20:10:18.987903 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.000241 4858 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"service-ca" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.020051 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.039129 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.048068 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/df828c47-fa99-4d7f-b3e5-46abda50e131-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-gp595\" (UID: \"df828c47-fa99-4d7f-b3e5-46abda50e131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.060126 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.080092 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090716 4858 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090740 4858 secret.go:188] Couldn't get secret openshift-ingress-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090784 4858 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090801 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-trusted-ca podName:b4d0e2b3-33dc-497e-94d1-4f728ac62fee nodeName:}" failed. No retries permitted until 2026-01-27 20:10:19.59078187 +0000 UTC m=+164.298597576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-trusted-ca") pod "ingress-operator-5b745b69d9-2szkp" (UID: "b4d0e2b3-33dc-497e-94d1-4f728ac62fee") : failed to sync configmap cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090829 4858 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090831 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-metrics-tls podName:b4d0e2b3-33dc-497e-94d1-4f728ac62fee nodeName:}" failed. No retries permitted until 2026-01-27 20:10:19.590810681 +0000 UTC m=+164.298626387 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-metrics-tls") pod "ingress-operator-5b745b69d9-2szkp" (UID: "b4d0e2b3-33dc-497e-94d1-4f728ac62fee") : failed to sync secret cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090841 4858 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090853 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55f73f2a-838f-49aa-81ab-1f5ab6de718a-config podName:55f73f2a-838f-49aa-81ab-1f5ab6de718a nodeName:}" failed. No retries permitted until 2026-01-27 20:10:19.590843952 +0000 UTC m=+164.298659658 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/55f73f2a-838f-49aa-81ab-1f5ab6de718a-config") pod "openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" (UID: "55f73f2a-838f-49aa-81ab-1f5ab6de718a") : failed to sync configmap cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090862 4858 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090870 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-webhook-cert podName:d7b71f62-e0b7-4903-bb5a-7c081c83fd29 nodeName:}" failed. No retries permitted until 2026-01-27 20:10:19.590863903 +0000 UTC m=+164.298679599 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-webhook-cert") pod "packageserver-d55dfcdfc-shmm4" (UID: "d7b71f62-e0b7-4903-bb5a-7c081c83fd29") : failed to sync secret cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090740 4858 secret.go:188] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090896 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/324e5805-141d-4281-8f8d-909b796a36e3-config-volume podName:324e5805-141d-4281-8f8d-909b796a36e3 nodeName:}" failed. No retries permitted until 2026-01-27 20:10:19.590877833 +0000 UTC m=+164.298693589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/324e5805-141d-4281-8f8d-909b796a36e3-config-volume") pod "dns-default-rl4lk" (UID: "324e5805-141d-4281-8f8d-909b796a36e3") : failed to sync configmap cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090761 4858 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090917 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-apiservice-cert podName:d7b71f62-e0b7-4903-bb5a-7c081c83fd29 nodeName:}" failed. 
No retries permitted until 2026-01-27 20:10:19.590909704 +0000 UTC m=+164.298725500 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-apiservice-cert") pod "packageserver-d55dfcdfc-shmm4" (UID: "d7b71f62-e0b7-4903-bb5a-7c081c83fd29") : failed to sync secret cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090931 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/324e5805-141d-4281-8f8d-909b796a36e3-metrics-tls podName:324e5805-141d-4281-8f8d-909b796a36e3 nodeName:}" failed. No retries permitted until 2026-01-27 20:10:19.590925705 +0000 UTC m=+164.298741411 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/324e5805-141d-4281-8f8d-909b796a36e3-metrics-tls") pod "dns-default-rl4lk" (UID: "324e5805-141d-4281-8f8d-909b796a36e3") : failed to sync secret cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090946 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55f73f2a-838f-49aa-81ab-1f5ab6de718a-serving-cert podName:55f73f2a-838f-49aa-81ab-1f5ab6de718a nodeName:}" failed. No retries permitted until 2026-01-27 20:10:19.590938965 +0000 UTC m=+164.298754671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/55f73f2a-838f-49aa-81ab-1f5ab6de718a-serving-cert") pod "openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" (UID: "55f73f2a-838f-49aa-81ab-1f5ab6de718a") : failed to sync secret cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090953 4858 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: E0127 20:10:19.090992 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f472a9b-da89-4553-b0fa-d6c8a2e59cca-srv-cert podName:9f472a9b-da89-4553-b0fa-d6c8a2e59cca nodeName:}" failed. No retries permitted until 2026-01-27 20:10:19.590982566 +0000 UTC m=+164.298798272 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/9f472a9b-da89-4553-b0fa-d6c8a2e59cca-srv-cert") pod "olm-operator-6b444d44fb-5vqm9" (UID: "9f472a9b-da89-4553-b0fa-d6c8a2e59cca") : failed to sync secret cache: timed out waiting for the condition Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.099275 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.118988 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.138909 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.160868 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.179708 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.201002 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.220893 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.239484 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.259956 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.280333 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.300701 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.319771 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.339905 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.358810 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.379974 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.400004 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 
20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.439887 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.440934 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.460461 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.479288 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.500165 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.520203 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.541075 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.559875 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.580359 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.612019 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55f73f2a-838f-49aa-81ab-1f5ab6de718a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hn4z\" (UID: \"55f73f2a-838f-49aa-81ab-1f5ab6de718a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.612169 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f73f2a-838f-49aa-81ab-1f5ab6de718a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hn4z\" (UID: \"55f73f2a-838f-49aa-81ab-1f5ab6de718a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.612215 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/324e5805-141d-4281-8f8d-909b796a36e3-metrics-tls\") pod \"dns-default-rl4lk\" (UID: \"324e5805-141d-4281-8f8d-909b796a36e3\") " pod="openshift-dns/dns-default-rl4lk" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.612276 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9f472a9b-da89-4553-b0fa-d6c8a2e59cca-srv-cert\") pod \"olm-operator-6b444d44fb-5vqm9\" (UID: \"9f472a9b-da89-4553-b0fa-d6c8a2e59cca\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.612311 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-webhook-cert\") pod \"packageserver-d55dfcdfc-shmm4\" (UID: \"d7b71f62-e0b7-4903-bb5a-7c081c83fd29\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.612358 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/324e5805-141d-4281-8f8d-909b796a36e3-config-volume\") pod \"dns-default-rl4lk\" (UID: \"324e5805-141d-4281-8f8d-909b796a36e3\") " pod="openshift-dns/dns-default-rl4lk" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.612407 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-metrics-tls\") pod \"ingress-operator-5b745b69d9-2szkp\" (UID: \"b4d0e2b3-33dc-497e-94d1-4f728ac62fee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.612577 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-apiservice-cert\") pod \"packageserver-d55dfcdfc-shmm4\" (UID: \"d7b71f62-e0b7-4903-bb5a-7c081c83fd29\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.613025 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-trusted-ca\") pod \"ingress-operator-5b745b69d9-2szkp\" (UID: \"b4d0e2b3-33dc-497e-94d1-4f728ac62fee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.613891 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55f73f2a-838f-49aa-81ab-1f5ab6de718a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hn4z\" (UID: \"55f73f2a-838f-49aa-81ab-1f5ab6de718a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.614328 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/324e5805-141d-4281-8f8d-909b796a36e3-config-volume\") pod \"dns-default-rl4lk\" (UID: \"324e5805-141d-4281-8f8d-909b796a36e3\") " pod="openshift-dns/dns-default-rl4lk" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.615464 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-trusted-ca\") pod \"ingress-operator-5b745b69d9-2szkp\" (UID: \"b4d0e2b3-33dc-497e-94d1-4f728ac62fee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.616429 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-metrics-tls\") pod \"ingress-operator-5b745b69d9-2szkp\" (UID: \"b4d0e2b3-33dc-497e-94d1-4f728ac62fee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.618293 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/324e5805-141d-4281-8f8d-909b796a36e3-metrics-tls\") pod \"dns-default-rl4lk\" (UID: \"324e5805-141d-4281-8f8d-909b796a36e3\") " pod="openshift-dns/dns-default-rl4lk" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.618852 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55f73f2a-838f-49aa-81ab-1f5ab6de718a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hn4z\" (UID: \"55f73f2a-838f-49aa-81ab-1f5ab6de718a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.619111 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-webhook-cert\") pod \"packageserver-d55dfcdfc-shmm4\" (UID: \"d7b71f62-e0b7-4903-bb5a-7c081c83fd29\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.621280 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9f472a9b-da89-4553-b0fa-d6c8a2e59cca-srv-cert\") pod \"olm-operator-6b444d44fb-5vqm9\" (UID: \"9f472a9b-da89-4553-b0fa-d6c8a2e59cca\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.622373 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-apiservice-cert\") pod \"packageserver-d55dfcdfc-shmm4\" (UID: \"d7b71f62-e0b7-4903-bb5a-7c081c83fd29\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.638413 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqhmd\" (UniqueName: \"kubernetes.io/projected/a18bc75b-fceb-4545-8a48-3296b1ce8f5c-kube-api-access-qqhmd\") pod \"dns-operator-744455d44c-9nxt8\" (UID: \"a18bc75b-fceb-4545-8a48-3296b1ce8f5c\") " pod="openshift-dns-operator/dns-operator-744455d44c-9nxt8" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.654936 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57f98\" (UniqueName: \"kubernetes.io/projected/6d09d17a-ebf7-49c1-ae11-17808115b60c-kube-api-access-57f98\") pod \"machine-config-operator-74547568cd-fqjkv\" (UID: \"6d09d17a-ebf7-49c1-ae11-17808115b60c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.672709 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn95n\" (UniqueName: \"kubernetes.io/projected/20d50172-b3f8-431b-962c-14a22d356995-kube-api-access-fn95n\") pod \"apiserver-7bbb656c7d-242rs\" (UID: \"20d50172-b3f8-431b-962c-14a22d356995\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.692993 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktk6z\" (UniqueName: \"kubernetes.io/projected/cbe6b979-e2aa-46c6-b4b0-67464630cddf-kube-api-access-ktk6z\") pod \"apiserver-76f77b778f-xp2mw\" (UID: \"cbe6b979-e2aa-46c6-b4b0-67464630cddf\") " pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 
27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.708379 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.716702 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8dkm\" (UniqueName: \"kubernetes.io/projected/bb41d7df-dacd-41b0-8399-63ddcee318f6-kube-api-access-v8dkm\") pod \"controller-manager-879f6c89f-dvbh6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.733037 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zglst\" (UniqueName: \"kubernetes.io/projected/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-kube-api-access-zglst\") pod \"oauth-openshift-558db77b4-mzt2r\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.753978 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxf2z\" (UniqueName: \"kubernetes.io/projected/178df3f9-bdb4-4e93-bb20-6e201cbf11ee-kube-api-access-bxf2z\") pod \"openshift-apiserver-operator-796bbdcf4f-flcdw\" (UID: \"178df3f9-bdb4-4e93-bb20-6e201cbf11ee\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-flcdw" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.768082 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.778684 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4r2f\" (UniqueName: \"kubernetes.io/projected/784da09b-9380-4388-9121-210d8ee8f5a6-kube-api-access-s4r2f\") pod \"kube-storage-version-migrator-operator-b67b599dd-thkzl\" (UID: \"784da09b-9380-4388-9121-210d8ee8f5a6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thkzl" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.795526 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsb98\" (UniqueName: \"kubernetes.io/projected/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-kube-api-access-bsb98\") pod \"route-controller-manager-6576b87f9c-5rpw8\" (UID: \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.808989 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.814298 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hlkt\" (UniqueName: \"kubernetes.io/projected/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d-kube-api-access-4hlkt\") pod \"marketplace-operator-79b997595-5wtjt\" (UID: \"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d\") " pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.834867 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xnrj\" (UniqueName: \"kubernetes.io/projected/057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b-kube-api-access-9xnrj\") pod \"authentication-operator-69f744f599-8p6rb\" (UID: \"057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.856110 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ksxg\" (UniqueName: \"kubernetes.io/projected/e38452d4-9405-466f-99cf-0706d9ca1c4f-kube-api-access-9ksxg\") pod \"openshift-config-operator-7777fb866f-qg6xk\" (UID: \"e38452d4-9405-466f-99cf-0706d9ca1c4f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.864061 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xp2mw"] Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.876659 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdkmh\" (UniqueName: \"kubernetes.io/projected/1520e31e-c4b3-4df3-a8cc-db7b0daf491f-kube-api-access-fdkmh\") pod \"downloads-7954f5f757-xpxs8\" (UID: \"1520e31e-c4b3-4df3-a8cc-db7b0daf491f\") " pod="openshift-console/downloads-7954f5f757-xpxs8" Jan 27 20:10:19 crc kubenswrapper[4858]: W0127 20:10:19.879298 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbe6b979_e2aa_46c6_b4b0_67464630cddf.slice/crio-3b1fe3021be132e4a02387d00268f923a535c812776856bdea3e5ec125039ef4 WatchSource:0}: Error finding container 3b1fe3021be132e4a02387d00268f923a535c812776856bdea3e5ec125039ef4: Status 404 returned error can't find the container with id 3b1fe3021be132e4a02387d00268f923a535c812776856bdea3e5ec125039ef4 Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.893065 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.896607 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-527rv\" (UniqueName: \"kubernetes.io/projected/01f33b82-5877-4c9d-ba44-3c6676c5f41d-kube-api-access-527rv\") pod \"control-plane-machine-set-operator-78cbb6b69f-c6zzp\" (UID: \"01f33b82-5877-4c9d-ba44-3c6676c5f41d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c6zzp" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.900000 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thkzl" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.907864 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.916631 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9mm9\" (UniqueName: \"kubernetes.io/projected/83766314-dad9-48dc-bd66-eea0bea1cefe-kube-api-access-g9mm9\") pod \"router-default-5444994796-68tdw\" (UID: \"83766314-dad9-48dc-bd66-eea0bea1cefe\") " pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.918256 4858 request.go:700] Waited for 1.868879855s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.922637 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c6zzp" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.928199 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.932932 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-flcdw" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.936131 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54w67\" (UniqueName: \"kubernetes.io/projected/a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1-kube-api-access-54w67\") pod \"machine-approver-56656f9798-6gczg\" (UID: \"a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.938730 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9nxt8" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.955947 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mzt2r"] Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.955953 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m26sk\" (UniqueName: \"kubernetes.io/projected/c408a00d-4317-45aa-afc3-eacf9e1be32f-kube-api-access-m26sk\") pod \"cluster-samples-operator-665b6dd947-69lnx\" (UID: \"c408a00d-4317-45aa-afc3-eacf9e1be32f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69lnx" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.961177 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.975798 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfn5c\" (UniqueName: \"kubernetes.io/projected/748214da-1856-4f93-82c5-34403ec46118-kube-api-access-rfn5c\") pod \"etcd-operator-b45778765-rsc77\" (UID: \"748214da-1856-4f93-82c5-34403ec46118\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.993376 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/64347cf6-4666-4346-b8b3-58300fa9c0c6-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-tpvkr\" (UID: \"64347cf6-4666-4346-b8b3-58300fa9c0c6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpvkr" Jan 27 20:10:19 crc kubenswrapper[4858]: I0127 20:10:19.999792 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 20:10:20 crc kubenswrapper[4858]: W0127 20:10:20.005317 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37cd25e0_a46b_4f44_a271_2d15a2ac9b07.slice/crio-c64bf76e77fb5f5c47fec98a664706efc8094ce0f63bfd1d512b013958a2c52e WatchSource:0}: Error finding container c64bf76e77fb5f5c47fec98a664706efc8094ce0f63bfd1d512b013958a2c52e: Status 404 returned error can't find the container with id c64bf76e77fb5f5c47fec98a664706efc8094ce0f63bfd1d512b013958a2c52e Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.016813 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.025745 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.027996 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xpxs8" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.035958 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.035983 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs"] Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.039958 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.048799 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69lnx" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.058710 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.059896 4858 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.079895 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.118242 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.119996 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.139130 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.159455 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.179312 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.180952 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.186822 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.202407 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5wtjt"] Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.218926 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpvkr" Jan 27 20:10:20 crc kubenswrapper[4858]: W0127 20:10:20.239353 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f6cf7fc_5cd0_4b28_992c_41a0e8526f4d.slice/crio-1ee21ff5ab0d50e14ca7eff9307e1854d34764ba8f023df85ff9e0e98357f6de WatchSource:0}: Error finding container 1ee21ff5ab0d50e14ca7eff9307e1854d34764ba8f023df85ff9e0e98357f6de: Status 404 returned error can't find the container with id 1ee21ff5ab0d50e14ca7eff9307e1854d34764ba8f023df85ff9e0e98357f6de Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.239440 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8pfk\" (UniqueName: \"kubernetes.io/projected/add9b00f-ce10-44d3-ade0-1881523bbefb-kube-api-access-m8pfk\") pod \"console-operator-58897d9998-rkkqh\" (UID: \"add9b00f-ce10-44d3-ade0-1881523bbefb\") " pod="openshift-console-operator/console-operator-58897d9998-rkkqh" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.244478 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df828c47-fa99-4d7f-b3e5-46abda50e131-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-gp595\" (UID: \"df828c47-fa99-4d7f-b3e5-46abda50e131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.256460 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2szkp\" (UID: \"b4d0e2b3-33dc-497e-94d1-4f728ac62fee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.256705 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rkkqh" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.276522 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5lhk\" (UniqueName: \"kubernetes.io/projected/324e5805-141d-4281-8f8d-909b796a36e3-kube-api-access-b5lhk\") pod \"dns-default-rl4lk\" (UID: \"324e5805-141d-4281-8f8d-909b796a36e3\") " pod="openshift-dns/dns-default-rl4lk" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.299889 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw2vk\" (UniqueName: \"kubernetes.io/projected/d2fddf7b-44dd-4b24-be5e-385e1792abaf-kube-api-access-tw2vk\") pod \"service-ca-operator-777779d784-fsx7q\" (UID: \"d2fddf7b-44dd-4b24-be5e-385e1792abaf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fsx7q" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.303240 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fsx7q" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.320384 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlmwn\" (UniqueName: \"kubernetes.io/projected/d7b71f62-e0b7-4903-bb5a-7c081c83fd29-kube-api-access-vlmwn\") pod \"packageserver-d55dfcdfc-shmm4\" (UID: \"d7b71f62-e0b7-4903-bb5a-7c081c83fd29\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.351004 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tjjz\" (UniqueName: \"kubernetes.io/projected/df828c47-fa99-4d7f-b3e5-46abda50e131-kube-api-access-8tjjz\") pod \"cluster-image-registry-operator-dc59b4c8b-gp595\" (UID: \"df828c47-fa99-4d7f-b3e5-46abda50e131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.363472 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.377278 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g44f2\" (UniqueName: \"kubernetes.io/projected/c306264d-7be9-4ec5-a807-a77810848e27-kube-api-access-g44f2\") pod \"service-ca-9c57cc56f-nrls7\" (UID: \"c306264d-7be9-4ec5-a807-a77810848e27\") " pod="openshift-service-ca/service-ca-9c57cc56f-nrls7" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.384377 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtv6n\" (UniqueName: \"kubernetes.io/projected/b4d0e2b3-33dc-497e-94d1-4f728ac62fee-kube-api-access-mtv6n\") pod \"ingress-operator-5b745b69d9-2szkp\" (UID: \"b4d0e2b3-33dc-497e-94d1-4f728ac62fee\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.395511 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-rl4lk" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.399318 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skljw\" (UniqueName: \"kubernetes.io/projected/9f472a9b-da89-4553-b0fa-d6c8a2e59cca-kube-api-access-skljw\") pod \"olm-operator-6b444d44fb-5vqm9\" (UID: \"9f472a9b-da89-4553-b0fa-d6c8a2e59cca\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.412950 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv"] Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.418994 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55f73f2a-838f-49aa-81ab-1f5ab6de718a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hn4z\" (UID: \"55f73f2a-838f-49aa-81ab-1f5ab6de718a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.512890 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thkzl"] Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.526098 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjjcq\" (UniqueName: \"kubernetes.io/projected/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-kube-api-access-pjjcq\") pod \"collect-profiles-29492400-mnbk5\" (UID: \"81e828d5-7d0a-451f-98bd-05c2a2fcbea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.526127 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3775c5de-f3e1-4bad-a5f3-8622f85db5ad-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l7rb7\" (UID: \"3775c5de-f3e1-4bad-a5f3-8622f85db5ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l7rb7" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.529004 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4kx8\" (UniqueName: \"kubernetes.io/projected/25775e1a-346e-4b05-ae25-819a5aad12b7-kube-api-access-v4kx8\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.529098 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab9ac3bd-eca9-44d5-a5f7-1074bc9889d0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-w27fl\" (UID: \"ab9ac3bd-eca9-44d5-a5f7-1074bc9889d0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w27fl" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.529152 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvltj\" (UniqueName: \"kubernetes.io/projected/b3f96545-5f43-4110-8de0-37141493c013-kube-api-access-qvltj\") pod \"catalog-operator-68c6474976-scdgl\" (UID: \"b3f96545-5f43-4110-8de0-37141493c013\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.529194 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbjvd\" (UniqueName: \"kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-kube-api-access-qbjvd\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.529278 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-secret-volume\") pod \"collect-profiles-29492400-mnbk5\" (UID: \"81e828d5-7d0a-451f-98bd-05c2a2fcbea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.529295 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805a0505-7eaa-40a2-a363-e986eec80f0e-config\") pod \"kube-controller-manager-operator-78b949d7b-tq895\" (UID: \"805a0505-7eaa-40a2-a363-e986eec80f0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tq895" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.529329 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25775e1a-346e-4b05-ae25-819a5aad12b7-console-oauth-config\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.529347 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f998f\" (UniqueName: \"kubernetes.io/projected/0a871399-32f8-4b9b-9118-c7e537864ada-kube-api-access-f998f\") pod \"machine-config-controller-84d6567774-fjf26\" (UID: \"0a871399-32f8-4b9b-9118-c7e537864ada\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.529398 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/631986f5-1f28-45ac-8390-c3ac0f3920c0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.529439 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-registry-tls\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.529472 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a871399-32f8-4b9b-9118-c7e537864ada-proxy-tls\") pod \"machine-config-controller-84d6567774-fjf26\" (UID: \"0a871399-32f8-4b9b-9118-c7e537864ada\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.529494 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3f96545-5f43-4110-8de0-37141493c013-profile-collector-cert\") pod \"catalog-operator-68c6474976-scdgl\" (UID: \"b3f96545-5f43-4110-8de0-37141493c013\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.531893 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-oauth-serving-cert\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.531952 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzfhx\" (UniqueName: \"kubernetes.io/projected/1a7f117f-1312-480c-a761-22d6fbf087fe-kube-api-access-fzfhx\") pod \"package-server-manager-789f6589d5-5cfl5\" (UID: \"1a7f117f-1312-480c-a761-22d6fbf087fe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5cfl5" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.531988 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-bound-sa-token\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.532242 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g95s6\" (UniqueName: \"kubernetes.io/projected/d8e80fae-5661-4f89-82bd-c1264c0115dd-kube-api-access-g95s6\") pod \"migrator-59844c95c7-qvzfh\" (UID: \"d8e80fae-5661-4f89-82bd-c1264c0115dd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qvzfh" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.532294 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/631986f5-1f28-45ac-8390-c3ac0f3920c0-registry-certificates\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.532348 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-config-volume\") pod \"collect-profiles-29492400-mnbk5\" (UID: \"81e828d5-7d0a-451f-98bd-05c2a2fcbea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.532397 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3f96545-5f43-4110-8de0-37141493c013-srv-cert\") pod \"catalog-operator-68c6474976-scdgl\" (UID: \"b3f96545-5f43-4110-8de0-37141493c013\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.532516 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.532567 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/631986f5-1f28-45ac-8390-c3ac0f3920c0-trusted-ca\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.532644 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/631986f5-1f28-45ac-8390-c3ac0f3920c0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.532665 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a7f117f-1312-480c-a761-22d6fbf087fe-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5cfl5\" (UID: \"1a7f117f-1312-480c-a761-22d6fbf087fe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5cfl5" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.532724 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/805a0505-7eaa-40a2-a363-e986eec80f0e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tq895\" (UID: \"805a0505-7eaa-40a2-a363-e986eec80f0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tq895" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.532752 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jsdr\" (UniqueName: \"kubernetes.io/projected/3775c5de-f3e1-4bad-a5f3-8622f85db5ad-kube-api-access-2jsdr\") pod \"openshift-controller-manager-operator-756b6f6bc6-l7rb7\" (UID: \"3775c5de-f3e1-4bad-a5f3-8622f85db5ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l7rb7" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.532814 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-console-config\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.532879 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3775c5de-f3e1-4bad-a5f3-8622f85db5ad-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-l7rb7\" (UID: \"3775c5de-f3e1-4bad-a5f3-8622f85db5ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l7rb7" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.532899 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsxxz\" (UniqueName: \"kubernetes.io/projected/ab9ac3bd-eca9-44d5-a5f7-1074bc9889d0-kube-api-access-lsxxz\") pod \"multus-admission-controller-857f4d67dd-w27fl\" (UID: \"ab9ac3bd-eca9-44d5-a5f7-1074bc9889d0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w27fl" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.532914 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-service-ca\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.532929 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-trusted-ca-bundle\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.532966 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a871399-32f8-4b9b-9118-c7e537864ada-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fjf26\" (UID: \"0a871399-32f8-4b9b-9118-c7e537864ada\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26" Jan 27 20:10:20 crc kubenswrapper[4858]: E0127 20:10:20.535438 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:21.035424228 +0000 UTC m=+165.743239934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.537980 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25775e1a-346e-4b05-ae25-819a5aad12b7-console-serving-cert\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.538103 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/805a0505-7eaa-40a2-a363-e986eec80f0e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tq895\" (UID: \"805a0505-7eaa-40a2-a363-e986eec80f0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tq895" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.544429 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c6zzp"] Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.547242 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk"] Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.577898 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-nrls7" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.629755 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.640667 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.640835 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjjcq\" (UniqueName: \"kubernetes.io/projected/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-kube-api-access-pjjcq\") pod \"collect-profiles-29492400-mnbk5\" (UID: \"81e828d5-7d0a-451f-98bd-05c2a2fcbea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.640881 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3775c5de-f3e1-4bad-a5f3-8622f85db5ad-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l7rb7\" (UID: \"3775c5de-f3e1-4bad-a5f3-8622f85db5ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l7rb7" Jan 27 20:10:20 crc kubenswrapper[4858]: E0127 20:10:20.640918 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:21.140890857 +0000 UTC m=+165.848706583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.641319 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4kx8\" (UniqueName: \"kubernetes.io/projected/25775e1a-346e-4b05-ae25-819a5aad12b7-kube-api-access-v4kx8\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.641387 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab9ac3bd-eca9-44d5-a5f7-1074bc9889d0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-w27fl\" (UID: \"ab9ac3bd-eca9-44d5-a5f7-1074bc9889d0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w27fl" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.641511 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvltj\" (UniqueName: \"kubernetes.io/projected/b3f96545-5f43-4110-8de0-37141493c013-kube-api-access-qvltj\") pod \"catalog-operator-68c6474976-scdgl\" (UID: \"b3f96545-5f43-4110-8de0-37141493c013\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.641689 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbjvd\" (UniqueName: \"kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-kube-api-access-qbjvd\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.641779 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-secret-volume\") pod \"collect-profiles-29492400-mnbk5\" (UID: \"81e828d5-7d0a-451f-98bd-05c2a2fcbea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.641845 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805a0505-7eaa-40a2-a363-e986eec80f0e-config\") pod \"kube-controller-manager-operator-78b949d7b-tq895\" (UID: \"805a0505-7eaa-40a2-a363-e986eec80f0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tq895" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.641882 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25775e1a-346e-4b05-ae25-819a5aad12b7-console-oauth-config\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.641928 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-f998f\" (UniqueName: \"kubernetes.io/projected/0a871399-32f8-4b9b-9118-c7e537864ada-kube-api-access-f998f\") pod \"machine-config-controller-84d6567774-fjf26\" (UID: \"0a871399-32f8-4b9b-9118-c7e537864ada\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642005 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/631986f5-1f28-45ac-8390-c3ac0f3920c0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642075 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3375f803-5cef-4621-b64c-e3682dd4758c-certs\") pod \"machine-config-server-g86cr\" (UID: \"3375f803-5cef-4621-b64c-e3682dd4758c\") " pod="openshift-machine-config-operator/machine-config-server-g86cr" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642137 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-registry-tls\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642172 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a871399-32f8-4b9b-9118-c7e537864ada-proxy-tls\") pod \"machine-config-controller-84d6567774-fjf26\" (UID: \"0a871399-32f8-4b9b-9118-c7e537864ada\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642236 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3f96545-5f43-4110-8de0-37141493c013-profile-collector-cert\") pod \"catalog-operator-68c6474976-scdgl\" (UID: \"b3f96545-5f43-4110-8de0-37141493c013\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642360 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-oauth-serving-cert\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642385 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzfhx\" (UniqueName: \"kubernetes.io/projected/1a7f117f-1312-480c-a761-22d6fbf087fe-kube-api-access-fzfhx\") pod \"package-server-manager-789f6589d5-5cfl5\" (UID: \"1a7f117f-1312-480c-a761-22d6fbf087fe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5cfl5" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642436 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-bound-sa-token\") pod \"image-registry-697d97f7c8-8tr47\" 
(UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642474 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/904f2c42-7297-4cec-a1ab-c2abd4f5132b-socket-dir\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642509 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3375f803-5cef-4621-b64c-e3682dd4758c-node-bootstrap-token\") pod \"machine-config-server-g86cr\" (UID: \"3375f803-5cef-4621-b64c-e3682dd4758c\") " pod="openshift-machine-config-operator/machine-config-server-g86cr" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642568 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g95s6\" (UniqueName: \"kubernetes.io/projected/d8e80fae-5661-4f89-82bd-c1264c0115dd-kube-api-access-g95s6\") pod \"migrator-59844c95c7-qvzfh\" (UID: \"d8e80fae-5661-4f89-82bd-c1264c0115dd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qvzfh" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642598 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/631986f5-1f28-45ac-8390-c3ac0f3920c0-registry-certificates\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642678 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-config-volume\") pod \"collect-profiles-29492400-mnbk5\" (UID: \"81e828d5-7d0a-451f-98bd-05c2a2fcbea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642698 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3f96545-5f43-4110-8de0-37141493c013-srv-cert\") pod \"catalog-operator-68c6474976-scdgl\" (UID: \"b3f96545-5f43-4110-8de0-37141493c013\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642691 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/631986f5-1f28-45ac-8390-c3ac0f3920c0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642739 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/904f2c42-7297-4cec-a1ab-c2abd4f5132b-registration-dir\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642784 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/904f2c42-7297-4cec-a1ab-c2abd4f5132b-plugins-dir\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642859 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642898 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/631986f5-1f28-45ac-8390-c3ac0f3920c0-trusted-ca\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642922 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/631986f5-1f28-45ac-8390-c3ac0f3920c0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642941 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a7f117f-1312-480c-a761-22d6fbf087fe-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5cfl5\" (UID: \"1a7f117f-1312-480c-a761-22d6fbf087fe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5cfl5" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.642980 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0de7aa2b-2206-4637-9950-da5e9e287440-cert\") pod \"ingress-canary-fzjn9\" (UID: \"0de7aa2b-2206-4637-9950-da5e9e287440\") " pod="openshift-ingress-canary/ingress-canary-fzjn9" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.643017 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/805a0505-7eaa-40a2-a363-e986eec80f0e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tq895\" (UID: \"805a0505-7eaa-40a2-a363-e986eec80f0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tq895" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.643067 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jsdr\" (UniqueName: \"kubernetes.io/projected/3775c5de-f3e1-4bad-a5f3-8622f85db5ad-kube-api-access-2jsdr\") pod \"openshift-controller-manager-operator-756b6f6bc6-l7rb7\" (UID: \"3775c5de-f3e1-4bad-a5f3-8622f85db5ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l7rb7" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.643133 4858 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-console-config\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.643181 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3775c5de-f3e1-4bad-a5f3-8622f85db5ad-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l7rb7\" (UID: \"3775c5de-f3e1-4bad-a5f3-8622f85db5ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l7rb7" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.643256 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsxxz\" (UniqueName: \"kubernetes.io/projected/ab9ac3bd-eca9-44d5-a5f7-1074bc9889d0-kube-api-access-lsxxz\") pod \"multus-admission-controller-857f4d67dd-w27fl\" (UID: \"ab9ac3bd-eca9-44d5-a5f7-1074bc9889d0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w27fl" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.643318 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-service-ca\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.643343 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-trusted-ca-bundle\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.643367 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a871399-32f8-4b9b-9118-c7e537864ada-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fjf26\" (UID: \"0a871399-32f8-4b9b-9118-c7e537864ada\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.643461 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4d76\" (UniqueName: \"kubernetes.io/projected/904f2c42-7297-4cec-a1ab-c2abd4f5132b-kube-api-access-c4d76\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.643516 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/904f2c42-7297-4cec-a1ab-c2abd4f5132b-csi-data-dir\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.643598 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25775e1a-346e-4b05-ae25-819a5aad12b7-console-serving-cert\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") 
" pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.643688 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/805a0505-7eaa-40a2-a363-e986eec80f0e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tq895\" (UID: \"805a0505-7eaa-40a2-a363-e986eec80f0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tq895" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.643713 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j8l4\" (UniqueName: \"kubernetes.io/projected/0de7aa2b-2206-4637-9950-da5e9e287440-kube-api-access-8j8l4\") pod \"ingress-canary-fzjn9\" (UID: \"0de7aa2b-2206-4637-9950-da5e9e287440\") " pod="openshift-ingress-canary/ingress-canary-fzjn9" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.643738 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmwsf\" (UniqueName: \"kubernetes.io/projected/3375f803-5cef-4621-b64c-e3682dd4758c-kube-api-access-zmwsf\") pod \"machine-config-server-g86cr\" (UID: \"3375f803-5cef-4621-b64c-e3682dd4758c\") " pod="openshift-machine-config-operator/machine-config-server-g86cr" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.643788 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/904f2c42-7297-4cec-a1ab-c2abd4f5132b-mountpoint-dir\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.643812 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3775c5de-f3e1-4bad-a5f3-8622f85db5ad-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l7rb7\" (UID: \"3775c5de-f3e1-4bad-a5f3-8622f85db5ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l7rb7" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.645766 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab9ac3bd-eca9-44d5-a5f7-1074bc9889d0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-w27fl\" (UID: \"ab9ac3bd-eca9-44d5-a5f7-1074bc9889d0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w27fl" Jan 27 20:10:20 crc kubenswrapper[4858]: E0127 20:10:20.646102 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:21.146077877 +0000 UTC m=+165.853893773 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.648749 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25775e1a-346e-4b05-ae25-819a5aad12b7-console-oauth-config\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.651698 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/631986f5-1f28-45ac-8390-c3ac0f3920c0-registry-certificates\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.652710 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/805a0505-7eaa-40a2-a363-e986eec80f0e-config\") pod \"kube-controller-manager-operator-78b949d7b-tq895\" (UID: \"805a0505-7eaa-40a2-a363-e986eec80f0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tq895" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.655738 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-registry-tls\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.655982 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-config-volume\") pod \"collect-profiles-29492400-mnbk5\" (UID: \"81e828d5-7d0a-451f-98bd-05c2a2fcbea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.657818 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-service-ca\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.658148 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-console-config\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.658650 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3f96545-5f43-4110-8de0-37141493c013-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-scdgl\" (UID: \"b3f96545-5f43-4110-8de0-37141493c013\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.659697 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a871399-32f8-4b9b-9118-c7e537864ada-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fjf26\" (UID: \"0a871399-32f8-4b9b-9118-c7e537864ada\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.659836 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.660282 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a871399-32f8-4b9b-9118-c7e537864ada-proxy-tls\") pod \"machine-config-controller-84d6567774-fjf26\" (UID: \"0a871399-32f8-4b9b-9118-c7e537864ada\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.661417 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a7f117f-1312-480c-a761-22d6fbf087fe-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5cfl5\" (UID: \"1a7f117f-1312-480c-a761-22d6fbf087fe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5cfl5" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.664119 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-oauth-serving-cert\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.664203 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/631986f5-1f28-45ac-8390-c3ac0f3920c0-trusted-ca\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.665814 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/805a0505-7eaa-40a2-a363-e986eec80f0e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tq895\" (UID: \"805a0505-7eaa-40a2-a363-e986eec80f0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tq895" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.670218 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.670359 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/631986f5-1f28-45ac-8390-c3ac0f3920c0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.671229 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3f96545-5f43-4110-8de0-37141493c013-srv-cert\") pod \"catalog-operator-68c6474976-scdgl\" (UID: \"b3f96545-5f43-4110-8de0-37141493c013\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.671867 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25775e1a-346e-4b05-ae25-819a5aad12b7-console-serving-cert\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.675016 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-trusted-ca-bundle\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.675442 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3775c5de-f3e1-4bad-a5f3-8622f85db5ad-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l7rb7\" (UID: \"3775c5de-f3e1-4bad-a5f3-8622f85db5ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l7rb7" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.677091 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-secret-volume\") pod \"collect-profiles-29492400-mnbk5\" (UID: \"81e828d5-7d0a-451f-98bd-05c2a2fcbea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.677373 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.680938 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjjcq\" (UniqueName: \"kubernetes.io/projected/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-kube-api-access-pjjcq\") pod \"collect-profiles-29492400-mnbk5\" (UID: \"81e828d5-7d0a-451f-98bd-05c2a2fcbea9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.714677 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dvbh6"] Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.737505 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvltj\" (UniqueName: \"kubernetes.io/projected/b3f96545-5f43-4110-8de0-37141493c013-kube-api-access-qvltj\") pod \"catalog-operator-68c6474976-scdgl\" (UID: \"b3f96545-5f43-4110-8de0-37141493c013\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.741310 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-flcdw"] Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.746135 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.746306 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0de7aa2b-2206-4637-9950-da5e9e287440-cert\") pod \"ingress-canary-fzjn9\" (UID: \"0de7aa2b-2206-4637-9950-da5e9e287440\") " pod="openshift-ingress-canary/ingress-canary-fzjn9" Jan 27 20:10:20 crc kubenswrapper[4858]: E0127 20:10:20.746396 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:21.246364248 +0000 UTC m=+165.954179954 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.746503 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4d76\" (UniqueName: \"kubernetes.io/projected/904f2c42-7297-4cec-a1ab-c2abd4f5132b-kube-api-access-c4d76\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.746535 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/904f2c42-7297-4cec-a1ab-c2abd4f5132b-csi-data-dir\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.746607 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j8l4\" (UniqueName: \"kubernetes.io/projected/0de7aa2b-2206-4637-9950-da5e9e287440-kube-api-access-8j8l4\") pod \"ingress-canary-fzjn9\" (UID: \"0de7aa2b-2206-4637-9950-da5e9e287440\") " pod="openshift-ingress-canary/ingress-canary-fzjn9" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.746630 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmwsf\" (UniqueName: \"kubernetes.io/projected/3375f803-5cef-4621-b64c-e3682dd4758c-kube-api-access-zmwsf\") pod \"machine-config-server-g86cr\" (UID: \"3375f803-5cef-4621-b64c-e3682dd4758c\") " pod="openshift-machine-config-operator/machine-config-server-g86cr" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.746658 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/904f2c42-7297-4cec-a1ab-c2abd4f5132b-mountpoint-dir\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.746765 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3375f803-5cef-4621-b64c-e3682dd4758c-certs\") pod \"machine-config-server-g86cr\" (UID: \"3375f803-5cef-4621-b64c-e3682dd4758c\") " pod="openshift-machine-config-operator/machine-config-server-g86cr" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.746827 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/904f2c42-7297-4cec-a1ab-c2abd4f5132b-socket-dir\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.746842 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3375f803-5cef-4621-b64c-e3682dd4758c-node-bootstrap-token\") pod \"machine-config-server-g86cr\" (UID: 
\"3375f803-5cef-4621-b64c-e3682dd4758c\") " pod="openshift-machine-config-operator/machine-config-server-g86cr" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.749883 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/904f2c42-7297-4cec-a1ab-c2abd4f5132b-registration-dir\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.749925 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/904f2c42-7297-4cec-a1ab-c2abd4f5132b-plugins-dir\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.749967 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.752259 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbjvd\" (UniqueName: \"kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-kube-api-access-qbjvd\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.754919 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4kx8\" (UniqueName: \"kubernetes.io/projected/25775e1a-346e-4b05-ae25-819a5aad12b7-kube-api-access-v4kx8\") pod \"console-f9d7485db-p72qt\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.755452 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/904f2c42-7297-4cec-a1ab-c2abd4f5132b-csi-data-dir\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.755899 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/904f2c42-7297-4cec-a1ab-c2abd4f5132b-socket-dir\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.755951 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/904f2c42-7297-4cec-a1ab-c2abd4f5132b-plugins-dir\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: E0127 20:10:20.756206 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 20:10:21.256190001 +0000 UTC m=+165.964005697 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.756766 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/904f2c42-7297-4cec-a1ab-c2abd4f5132b-mountpoint-dir\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.756898 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/904f2c42-7297-4cec-a1ab-c2abd4f5132b-registration-dir\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.759590 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0de7aa2b-2206-4637-9950-da5e9e287440-cert\") pod \"ingress-canary-fzjn9\" (UID: \"0de7aa2b-2206-4637-9950-da5e9e287440\") " pod="openshift-ingress-canary/ingress-canary-fzjn9" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.763426 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f998f\" (UniqueName: \"kubernetes.io/projected/0a871399-32f8-4b9b-9118-c7e537864ada-kube-api-access-f998f\") pod \"machine-config-controller-84d6567774-fjf26\" (UID: \"0a871399-32f8-4b9b-9118-c7e537864ada\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.768418 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/3375f803-5cef-4621-b64c-e3682dd4758c-certs\") pod \"machine-config-server-g86cr\" (UID: \"3375f803-5cef-4621-b64c-e3682dd4758c\") " pod="openshift-machine-config-operator/machine-config-server-g86cr" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.784841 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/3375f803-5cef-4621-b64c-e3682dd4758c-node-bootstrap-token\") pod \"machine-config-server-g86cr\" (UID: \"3375f803-5cef-4621-b64c-e3682dd4758c\") " pod="openshift-machine-config-operator/machine-config-server-g86cr" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.808426 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9nxt8"] Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.810841 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8"] Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.812920 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzfhx\" (UniqueName: 
\"kubernetes.io/projected/1a7f117f-1312-480c-a761-22d6fbf087fe-kube-api-access-fzfhx\") pod \"package-server-manager-789f6589d5-5cfl5\" (UID: \"1a7f117f-1312-480c-a761-22d6fbf087fe\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5cfl5" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.839914 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.843108 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-bound-sa-token\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.844831 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g95s6\" (UniqueName: \"kubernetes.io/projected/d8e80fae-5661-4f89-82bd-c1264c0115dd-kube-api-access-g95s6\") pod \"migrator-59844c95c7-qvzfh\" (UID: \"d8e80fae-5661-4f89-82bd-c1264c0115dd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qvzfh" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.850774 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:20 crc kubenswrapper[4858]: E0127 20:10:20.851204 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:21.351186739 +0000 UTC m=+166.059002445 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.866940 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.872519 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jsdr\" (UniqueName: \"kubernetes.io/projected/3775c5de-f3e1-4bad-a5f3-8622f85db5ad-kube-api-access-2jsdr\") pod \"openshift-controller-manager-operator-756b6f6bc6-l7rb7\" (UID: \"3775c5de-f3e1-4bad-a5f3-8622f85db5ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l7rb7" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.887273 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/805a0505-7eaa-40a2-a363-e986eec80f0e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tq895\" (UID: \"805a0505-7eaa-40a2-a363-e986eec80f0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tq895" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.901895 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5cfl5" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.908004 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c6zzp" event={"ID":"01f33b82-5877-4c9d-ba44-3c6676c5f41d","Type":"ContainerStarted","Data":"0d6210e764c810d51618d65ac3bc7c244a67145758645cf3632ad79206edb3ce"} Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.916162 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.919215 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsxxz\" (UniqueName: \"kubernetes.io/projected/ab9ac3bd-eca9-44d5-a5f7-1074bc9889d0-kube-api-access-lsxxz\") pod \"multus-admission-controller-857f4d67dd-w27fl\" (UID: \"ab9ac3bd-eca9-44d5-a5f7-1074bc9889d0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w27fl" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.924465 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.930780 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4d76\" (UniqueName: \"kubernetes.io/projected/904f2c42-7297-4cec-a1ab-c2abd4f5132b-kube-api-access-c4d76\") pod \"csi-hostpathplugin-5mx8r\" (UID: \"904f2c42-7297-4cec-a1ab-c2abd4f5132b\") " pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.939675 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l7rb7" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.948909 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qvzfh" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.949839 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmwsf\" (UniqueName: \"kubernetes.io/projected/3375f803-5cef-4621-b64c-e3682dd4758c-kube-api-access-zmwsf\") pod \"machine-config-server-g86cr\" (UID: \"3375f803-5cef-4621-b64c-e3682dd4758c\") " pod="openshift-machine-config-operator/machine-config-server-g86cr" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.950150 4858 generic.go:334] "Generic (PLEG): container finished" podID="cbe6b979-e2aa-46c6-b4b0-67464630cddf" containerID="99ccbaa816d3919cb29cf7ce7ea61403dd78367db4fa13749885686caf1e7fd7" exitCode=0 Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.950217 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" event={"ID":"cbe6b979-e2aa-46c6-b4b0-67464630cddf","Type":"ContainerDied","Data":"99ccbaa816d3919cb29cf7ce7ea61403dd78367db4fa13749885686caf1e7fd7"} Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.950247 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" event={"ID":"cbe6b979-e2aa-46c6-b4b0-67464630cddf","Type":"ContainerStarted","Data":"3b1fe3021be132e4a02387d00268f923a535c812776856bdea3e5ec125039ef4"} Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.953356 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:20 crc kubenswrapper[4858]: E0127 20:10:20.953693 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:21.453679933 +0000 UTC m=+166.161495649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.961921 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9nxt8" event={"ID":"a18bc75b-fceb-4545-8a48-3296b1ce8f5c","Type":"ContainerStarted","Data":"98ad054d15f7e5a8dcedb28bdf9231484f3772d20f092608866f1ec309d27d1e"} Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.964092 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" event={"ID":"3f3f573f-78f3-46f9-8db7-c3df5ca093e9","Type":"ContainerStarted","Data":"10539c960846aec2d551b8cb991f4a84182ea0615bb88728806bc4c11b668f52"} Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.968049 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j8l4\" (UniqueName: \"kubernetes.io/projected/0de7aa2b-2206-4637-9950-da5e9e287440-kube-api-access-8j8l4\") pod \"ingress-canary-fzjn9\" (UID: \"0de7aa2b-2206-4637-9950-da5e9e287440\") " pod="openshift-ingress-canary/ingress-canary-fzjn9" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.972815 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" event={"ID":"37cd25e0-a46b-4f44-a271-2d15a2ac9b07","Type":"ContainerStarted","Data":"71de311a25d593c305f43e78f57265bfbc26c9f7875bb94720272b9da8fad2b9"} Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.972866 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" event={"ID":"37cd25e0-a46b-4f44-a271-2d15a2ac9b07","Type":"ContainerStarted","Data":"c64bf76e77fb5f5c47fec98a664706efc8094ce0f63bfd1d512b013958a2c52e"} Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.974000 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.989856 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tq895" Jan 27 20:10:20 crc kubenswrapper[4858]: I0127 20:10:20.997855 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" event={"ID":"e38452d4-9405-466f-99cf-0706d9ca1c4f","Type":"ContainerStarted","Data":"c96096652d1ca7f737b5305baf22a6f09bfda8a184324fe2045edcde8474dd3e"} Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.008465 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8p6rb"] Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.010023 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-g86cr" Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.015826 4858 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mzt2r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.015880 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" podUID="37cd25e0-a46b-4f44-a271-2d15a2ac9b07" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.020875 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" event={"ID":"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d","Type":"ContainerStarted","Data":"efb59333516b70197bce8799ad8c5a0a47720e9ba044fff40ce02cf45e14988e"} Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.020917 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" event={"ID":"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d","Type":"ContainerStarted","Data":"1ee21ff5ab0d50e14ca7eff9307e1854d34764ba8f023df85ff9e0e98357f6de"} Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.022037 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.030153 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.036335 4858 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5wtjt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.036376 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" podUID="4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.036601 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fzjn9" Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.040757 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" event={"ID":"a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1","Type":"ContainerStarted","Data":"2e1221d8ba8124c1f037aa9ae2db720813cadda81fe67c2bffc80957774dac30"} Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.044387 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-68tdw" event={"ID":"83766314-dad9-48dc-bd66-eea0bea1cefe","Type":"ContainerStarted","Data":"5cc27ebc7b6c4164b0ce6e4463d940dd366dcaa9c3b539b69771563e47f4b4a4"} Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.044426 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-68tdw" event={"ID":"83766314-dad9-48dc-bd66-eea0bea1cefe","Type":"ContainerStarted","Data":"afce37679ab34b3a8e9f94ef99126cbbbc5aba7c7321b109caa10cd47d35b2f7"} Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.054325 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.056915 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" event={"ID":"20d50172-b3f8-431b-962c-14a22d356995","Type":"ContainerStarted","Data":"f96529e19a8c5f6e5d08a4b59d692f414be595c869a86b1464d3e143003ffa22"} Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.056972 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" event={"ID":"20d50172-b3f8-431b-962c-14a22d356995","Type":"ContainerStarted","Data":"1b219520c497a1d9fc84d87e7785f82f7daffbdb48c053dd54c1969a855d430c"} Jan 27 20:10:21 crc kubenswrapper[4858]: E0127 20:10:21.058730 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:21.55869477 +0000 UTC m=+166.266510566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.068814 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" event={"ID":"6d09d17a-ebf7-49c1-ae11-17808115b60c","Type":"ContainerStarted","Data":"91e29fb887f8ebf2982defdda284b8ece8ee812a28d2452102abea5a1b0ebd4f"} Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.068883 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" event={"ID":"6d09d17a-ebf7-49c1-ae11-17808115b60c","Type":"ContainerStarted","Data":"4918bdecc8e65d2b8d1dcb9d8e79ff3b8487042b779565bad6297d1f1d569a21"} Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.070045 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" event={"ID":"bb41d7df-dacd-41b0-8399-63ddcee318f6","Type":"ContainerStarted","Data":"3413fabb5b3c230ef3f484c78fc856c1b1f54993d12809da50610da5c87ba5a5"} Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.070982 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thkzl" event={"ID":"784da09b-9380-4388-9121-210d8ee8f5a6","Type":"ContainerStarted","Data":"f13a0c48344f35a3a1e92c3498abf6b858fe9b7ecd8fb6271fc54d44bd7d4c82"} Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.076334 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-flcdw" event={"ID":"178df3f9-bdb4-4e93-bb20-6e201cbf11ee","Type":"ContainerStarted","Data":"9618cbf3b5cb7fce2eab7ba20753245e2f89224191d21b32dd034fa2411fdb10"} Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.107287 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xpxs8"] Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.108988 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69lnx"] Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.118328 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rsc77"] Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.149038 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpvkr"] Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.172136 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:21 crc kubenswrapper[4858]: E0127 20:10:21.174106 4858 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:21.674086726 +0000 UTC m=+166.381902512 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.184042 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-w27fl" Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.189967 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.205482 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.205674 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 27 20:10:21 crc kubenswrapper[4858]: W0127 20:10:21.257881 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod748214da_1856_4f93_82c5_34403ec46118.slice/crio-3cc81a433ac2e8c3233c9414451a09ab24c65046d43484e15cde109c116a9a9b WatchSource:0}: Error finding container 3cc81a433ac2e8c3233c9414451a09ab24c65046d43484e15cde109c116a9a9b: Status 404 returned error can't find the container with id 3cc81a433ac2e8c3233c9414451a09ab24c65046d43484e15cde109c116a9a9b Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.316170 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:21 crc kubenswrapper[4858]: E0127 20:10:21.317699 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:21.817637043 +0000 UTC m=+166.525452759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.317754 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4"] Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.320145 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rkkqh"] Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.331189 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fsx7q"] Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.345028 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rl4lk"] Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.360905 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nrls7"] Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.417780 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:21 crc kubenswrapper[4858]: E0127 20:10:21.418238 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:21.918227022 +0000 UTC m=+166.626042728 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:21 crc kubenswrapper[4858]: W0127 20:10:21.516776 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7b71f62_e0b7_4903_bb5a_7c081c83fd29.slice/crio-63a0f494d9b3493d319d191b5d3f8840aded0bc4e7fa7924053a401afbbb853a WatchSource:0}: Error finding container 63a0f494d9b3493d319d191b5d3f8840aded0bc4e7fa7924053a401afbbb853a: Status 404 returned error can't find the container with id 63a0f494d9b3493d319d191b5d3f8840aded0bc4e7fa7924053a401afbbb853a Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.519293 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:21 crc kubenswrapper[4858]: E0127 20:10:21.519719 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:22.019702277 +0000 UTC m=+166.727517983 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.546749 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp"] Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.563268 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595"] Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.576310 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9"] Jan 27 20:10:21 crc kubenswrapper[4858]: W0127 20:10:21.595657 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadd9b00f_ce10_44d3_ade0_1881523bbefb.slice/crio-354f603a421a9a85b72e7627f1a4a4d05f4c0e5003923d58b582ad65005c1d5d WatchSource:0}: Error finding container 354f603a421a9a85b72e7627f1a4a4d05f4c0e5003923d58b582ad65005c1d5d: Status 404 returned error can't find the container with id 354f603a421a9a85b72e7627f1a4a4d05f4c0e5003923d58b582ad65005c1d5d Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.624881 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:21 crc kubenswrapper[4858]: E0127 20:10:21.626539 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:22.126522736 +0000 UTC m=+166.834338442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.728132 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:21 crc kubenswrapper[4858]: E0127 20:10:21.728654 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 20:10:22.228636839 +0000 UTC m=+166.936452535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.813320 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z"] Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.821574 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26"] Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.829342 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:21 crc kubenswrapper[4858]: E0127 20:10:21.830068 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:22.330053692 +0000 UTC m=+167.037869388 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.830465 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl"] Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.930057 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:21 crc kubenswrapper[4858]: E0127 20:10:21.930248 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:22.430220459 +0000 UTC m=+167.138036175 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:21 crc kubenswrapper[4858]: I0127 20:10:21.930321 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:21 crc kubenswrapper[4858]: E0127 20:10:21.930908 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:22.430900609 +0000 UTC m=+167.138716315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.040043 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5"] Jan 27 20:10:22 crc kubenswrapper[4858]: E0127 20:10:22.040148 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:22.540121077 +0000 UTC m=+167.247936773 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.040043 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.042035 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:22 crc kubenswrapper[4858]: E0127 20:10:22.042468 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:22.542447764 +0000 UTC m=+167.250263520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.050144 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5cfl5"] Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.063053 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qvzfh"] Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.089225 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-p72qt"] Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.089261 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l7rb7"] Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.094125 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-68tdw" podStartSLOduration=138.094100203 podStartE2EDuration="2m18.094100203s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:22.062914224 +0000 UTC m=+166.770729930" watchObservedRunningTime="2026-01-27 20:10:22.094100203 +0000 UTC m=+166.801915909" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.101953 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" event={"ID":"3f3f573f-78f3-46f9-8db7-c3df5ca093e9","Type":"ContainerStarted","Data":"bd5e394ca6cce8b4570c896daaaf2d7f18794bb788d9c15aa4ad431053e3f585"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.102999 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.120224 4858 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-5rpw8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.120286 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" podUID="3f3f573f-78f3-46f9-8db7-c3df5ca093e9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.134478 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xpxs8" event={"ID":"1520e31e-c4b3-4df3-a8cc-db7b0daf491f","Type":"ContainerStarted","Data":"e679fc9d55876bf5591acb69cf021d59e1e8fd92bf17d522b95110672135fd45"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.134535 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xpxs8" event={"ID":"1520e31e-c4b3-4df3-a8cc-db7b0daf491f","Type":"ContainerStarted","Data":"122a83fe388dddc94d43404960a48efe2e70f08b86849f3a08bc74b82153854b"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.135184 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-w27fl"] Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.135528 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-xpxs8" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.140407 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rkkqh" event={"ID":"add9b00f-ce10-44d3-ade0-1881523bbefb","Type":"ContainerStarted","Data":"354f603a421a9a85b72e7627f1a4a4d05f4c0e5003923d58b582ad65005c1d5d"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.144027 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:22 crc kubenswrapper[4858]: E0127 20:10:22.145498 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:22.645477614 +0000 UTC m=+167.353293330 (durationBeforeRetry 500ms). 
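
Every one of these failures is then parked by the kubelet with a "No retries permitted until <deadline> (durationBeforeRetry 500ms)" stamp: after a failure the operation records a deadline and is skipped until the deadline passes. A hedged sketch of that gating, assuming the fixed 500ms window shown in this excerpt (the real kubelet also grows the window on repeated failures, which this excerpt is too short to show):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryGate reproduces the "no retries permitted until" behavior:
// once an operation fails, it may not run again until the recorded
// deadline has passed.
type retryGate struct {
	durationBeforeRetry time.Duration // 500ms, per the log lines above
	notBefore           time.Time
}

func (g *retryGate) tryRun(op func() error) error {
	if time.Now().Before(g.notBefore) {
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
			g.notBefore.Format(time.RFC3339Nano), g.durationBeforeRetry)
	}
	if err := op(); err != nil {
		g.notBefore = time.Now().Add(g.durationBeforeRetry)
		return err
	}
	return nil
}

func main() {
	g := &retryGate{durationBeforeRetry: 500 * time.Millisecond}
	failing := func() error { return errors.New("driver not registered yet") }
	fmt.Println(g.tryRun(failing)) // fails and arms the gate
	fmt.Println(g.tryRun(failing)) // rejected: deadline not yet reached
	time.Sleep(600 * time.Millisecond)
	fmt.Println(g.tryRun(failing)) // retried once the window has elapsed
}

The 500ms window matches the cadence visible above: the same volume reappears in the reconciler roughly twice per second.
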
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.147481 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" podStartSLOduration=138.147454851 podStartE2EDuration="2m18.147454851s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:22.117877248 +0000 UTC m=+166.825692964" watchObservedRunningTime="2026-01-27 20:10:22.147454851 +0000 UTC m=+166.855270557" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.152409 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.152489 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:10:22 crc kubenswrapper[4858]: W0127 20:10:22.158428 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a7f117f_1312_480c_a761_22d6fbf087fe.slice/crio-9821ca21f7d7e992a810508bbba7ff4cab58d15340648c26cc2b7a49a2c48dce WatchSource:0}: Error finding container 9821ca21f7d7e992a810508bbba7ff4cab58d15340648c26cc2b7a49a2c48dce: Status 404 returned error can't find the container with id 9821ca21f7d7e992a810508bbba7ff4cab58d15340648c26cc2b7a49a2c48dce Jan 27 20:10:22 crc kubenswrapper[4858]: W0127 20:10:22.170054 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8e80fae_5661_4f89_82bd_c1264c0115dd.slice/crio-d9a497c8450ef094928158711e9442a803056d002f379d1fe33e68071b1c20eb WatchSource:0}: Error finding container d9a497c8450ef094928158711e9442a803056d002f379d1fe33e68071b1c20eb: Status 404 returned error can't find the container with id d9a497c8450ef094928158711e9442a803056d002f379d1fe33e68071b1c20eb Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.171628 4858 generic.go:334] "Generic (PLEG): container finished" podID="20d50172-b3f8-431b-962c-14a22d356995" containerID="f96529e19a8c5f6e5d08a4b59d692f414be595c869a86b1464d3e143003ffa22" exitCode=0 Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.172278 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" event={"ID":"20d50172-b3f8-431b-962c-14a22d356995","Type":"ContainerDied","Data":"f96529e19a8c5f6e5d08a4b59d692f414be595c869a86b1464d3e143003ffa22"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.172321 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" event={"ID":"20d50172-b3f8-431b-962c-14a22d356995","Type":"ContainerStarted","Data":"6ef27deaf9f07067a8a20a22e9f2f8e9a76e6fa9f9adcea270921e1f3a50fd12"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.179258 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5mx8r"] Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.182413 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fzjn9"] Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.184329 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26" event={"ID":"0a871399-32f8-4b9b-9118-c7e537864ada","Type":"ContainerStarted","Data":"7e88b4927e8a971fe88abd68355ace17b5262bbecd78cc711441de6d8642a095"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.186149 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" event={"ID":"b4d0e2b3-33dc-497e-94d1-4f728ac62fee","Type":"ContainerStarted","Data":"60423b0e063695629e6b4f272112ee497371c371f132a90fc587f346874c07b5"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.201613 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-mqblw" podStartSLOduration=138.201595471 podStartE2EDuration="2m18.201595471s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:22.199895152 +0000 UTC m=+166.907710858" watchObservedRunningTime="2026-01-27 20:10:22.201595471 +0000 UTC m=+166.909411177" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.206606 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:22 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:22 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:22 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.206652 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.214937 4858 generic.go:334] "Generic (PLEG): container finished" podID="e38452d4-9405-466f-99cf-0706d9ca1c4f" containerID="02c871e4a5e5b8d0f21253b3f0a1c574ba1e61d6443e947f02fa2d039eb15a73" exitCode=0 Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.215048 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" event={"ID":"e38452d4-9405-466f-99cf-0706d9ca1c4f","Type":"ContainerDied","Data":"02c871e4a5e5b8d0f21253b3f0a1c574ba1e61d6443e947f02fa2d039eb15a73"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.221632 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tq895"] Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.245274 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" event={"ID":"55f73f2a-838f-49aa-81ab-1f5ab6de718a","Type":"ContainerStarted","Data":"8d5b8f273d95b186877b589ac90c5068ffde5414f5c8b98f5910abdefaee71df"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.247031 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:22 crc kubenswrapper[4858]: E0127 20:10:22.248456 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:22.748439401 +0000 UTC m=+167.456255187 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.251494 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" event={"ID":"057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b","Type":"ContainerStarted","Data":"35a3b402758c520fc59080ab0944778f8bfc98f842011104bb472d61a6131c65"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.251661 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" event={"ID":"057c5e06-d5a3-4a2f-bf5a-6aa8271b9b8b","Type":"ContainerStarted","Data":"3bdb317a764a03c890ce8a5728062a34827aae1d15fb13b80627f1a277669976"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.256003 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595" event={"ID":"df828c47-fa99-4d7f-b3e5-46abda50e131","Type":"ContainerStarted","Data":"01a45309c4f1091c23a7038b37e2e15f08a506c790f67d38343ae769f57ecbd3"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.276268 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" event={"ID":"bb41d7df-dacd-41b0-8399-63ddcee318f6","Type":"ContainerStarted","Data":"a06c97e538f734aa5d35c267ea97be53b5daa0eed12c6d6be8921831ae24b6f8"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.276685 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.291336 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thkzl" event={"ID":"784da09b-9380-4388-9121-210d8ee8f5a6","Type":"ContainerStarted","Data":"9d027bf1ef6cdc3282d774399bccc4572c5acd73bf0ab4429b308239bba0d50a"} Jan 27 20:10:22 crc 
kubenswrapper[4858]: I0127 20:10:22.295090 4858 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-dvbh6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.295145 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" podUID="bb41d7df-dacd-41b0-8399-63ddcee318f6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.309043 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" event={"ID":"cbe6b979-e2aa-46c6-b4b0-67464630cddf","Type":"ContainerStarted","Data":"297b48f7dbe951ae6b82de1271010795496fe9383feb05d5958a1cabdd7c28fe"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.314926 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l7rb7" event={"ID":"3775c5de-f3e1-4bad-a5f3-8622f85db5ad","Type":"ContainerStarted","Data":"9ee9f4a2b22b69f94e4c76e757525a9f7216e0c844937e61723a13a0507b11e3"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.317090 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpvkr" event={"ID":"64347cf6-4666-4346-b8b3-58300fa9c0c6","Type":"ContainerStarted","Data":"08f79e5feb2f9af0171e53c2b673aee5e2e5d920a59730e4f2ff4c90dfbba8dc"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.339790 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" podStartSLOduration=139.339772524 podStartE2EDuration="2m19.339772524s" podCreationTimestamp="2026-01-27 20:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:22.30981594 +0000 UTC m=+167.017631666" watchObservedRunningTime="2026-01-27 20:10:22.339772524 +0000 UTC m=+167.047588240" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.347633 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.349055 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" event={"ID":"a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1","Type":"ContainerStarted","Data":"12284e1c916e912c9dfd570ae7fdc7de8946a73486b2a093cf5bee1e64272f19"} Jan 27 20:10:22 crc kubenswrapper[4858]: E0127 20:10:22.351616 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:22.851595675 +0000 UTC m=+167.559411421 (durationBeforeRetry 500ms). 
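
The readiness probe failures interleaved here (route-controller-manager, downloads, controller-manager) all report "connect: connection refused": the container process has started but nothing is listening on the probed port yet, so the kubelet's HTTP probe fails at the TCP dial. A small loop in the same spirit as that prober; the URL, timeout, and cadence are illustrative, not the kubelet's configuration:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs one HTTP readiness check: any transport error
// (such as "connect: connection refused") or a status outside
// 200-399 counts as a failure.
func probeOnce(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 10.217.0.7:8443: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Illustrative endpoint; the log probes e.g. https://10.217.0.7:8443/healthz.
	for i := 0; i < 3; i++ {
		if err := probeOnce("http://127.0.0.1:8080/healthz"); err != nil {
			fmt.Println("Probe failed:", err)
		} else {
			fmt.Println("ready")
			return
		}
		time.Sleep(1 * time.Second)
	}
}
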
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.368249 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c6zzp" event={"ID":"01f33b82-5877-4c9d-ba44-3c6676c5f41d","Type":"ContainerStarted","Data":"b54b072af7ea3f5aeec8f98e25aec8bca00c51d43e82c22d4b19e3688c92f5d9"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.384504 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" event={"ID":"d7b71f62-e0b7-4903-bb5a-7c081c83fd29","Type":"ContainerStarted","Data":"63a0f494d9b3493d319d191b5d3f8840aded0bc4e7fa7924053a401afbbb853a"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.426528 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fsx7q" event={"ID":"d2fddf7b-44dd-4b24-be5e-385e1792abaf","Type":"ContainerStarted","Data":"4e2bdcf0431ce99d156963e73ce658b2179851b0cb2f2787a263e5357faf2fc6"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.444473 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rl4lk" event={"ID":"324e5805-141d-4281-8f8d-909b796a36e3","Type":"ContainerStarted","Data":"c5abc4dbe0294f9e3290e147950679beb7f3f222c525d90321be8c289cb3b1ac"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.449543 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:22 crc kubenswrapper[4858]: E0127 20:10:22.450847 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:22.950833835 +0000 UTC m=+167.658649541 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.452895 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" event={"ID":"9f472a9b-da89-4553-b0fa-d6c8a2e59cca","Type":"ContainerStarted","Data":"a7511bf36b71ac9e3c22a3b468396bf2791e228ea5e7c8e1459390dcb0300a87"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.463320 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-flcdw" event={"ID":"178df3f9-bdb4-4e93-bb20-6e201cbf11ee","Type":"ContainerStarted","Data":"d91dddb7f8d850a1ee3cba1c69018a987570fc07d7090cdc6d8961981cd2d53a"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.472974 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-g86cr" event={"ID":"3375f803-5cef-4621-b64c-e3682dd4758c","Type":"ContainerStarted","Data":"0507d08c96202db6ef85a56d39ad65cb8835b0264a502890e43fc1286700bcd9"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.484541 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-nrls7" event={"ID":"c306264d-7be9-4ec5-a807-a77810848e27","Type":"ContainerStarted","Data":"b584fa8dbc61072b5c89e4de4bdf568ade6838a88a26f38a948ae1b7026ff167"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.486096 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" event={"ID":"b3f96545-5f43-4110-8de0-37141493c013","Type":"ContainerStarted","Data":"ba3bc41c319a0f3642d20b1f257952fcf153bd635c1a2bd006198a3de8a70a90"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.500841 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9nxt8" event={"ID":"a18bc75b-fceb-4545-8a48-3296b1ce8f5c","Type":"ContainerStarted","Data":"f2ca438447f9e70836cb58efb13758a8af3e1c0aae3b3ca66194dc185f45a7a4"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.505785 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" event={"ID":"748214da-1856-4f93-82c5-34403ec46118","Type":"ContainerStarted","Data":"3cc81a433ac2e8c3233c9414451a09ab24c65046d43484e15cde109c116a9a9b"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.542809 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" event={"ID":"6d09d17a-ebf7-49c1-ae11-17808115b60c","Type":"ContainerStarted","Data":"e7f41f21fb7c84405273c8b45051886edc4200bd3d74e11a1eebe48e59239626"} Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.550027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69lnx" event={"ID":"c408a00d-4317-45aa-afc3-eacf9e1be32f","Type":"ContainerStarted","Data":"5977051e651f14fc2142618b29b8988ebe459e7002e6971d0e5eb261d0f3dce3"} Jan 27 20:10:22 crc 
kubenswrapper[4858]: I0127 20:10:22.551202 4858 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5wtjt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.551240 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" podUID="4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.551801 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:22 crc kubenswrapper[4858]: E0127 20:10:22.552175 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:23.052158175 +0000 UTC m=+167.759973871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.645767 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-g86cr" podStartSLOduration=5.645748093 podStartE2EDuration="5.645748093s" podCreationTimestamp="2026-01-27 20:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:22.644396104 +0000 UTC m=+167.352211820" watchObservedRunningTime="2026-01-27 20:10:22.645748093 +0000 UTC m=+167.353563799" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.654321 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:22 crc kubenswrapper[4858]: E0127 20:10:22.659058 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:23.159037526 +0000 UTC m=+167.866853232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.696958 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" podStartSLOduration=139.696935088 podStartE2EDuration="2m19.696935088s" podCreationTimestamp="2026-01-27 20:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:22.695163377 +0000 UTC m=+167.402979083" watchObservedRunningTime="2026-01-27 20:10:22.696935088 +0000 UTC m=+167.404750794" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.697134 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" podStartSLOduration=138.697129674 podStartE2EDuration="2m18.697129674s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:22.667710866 +0000 UTC m=+167.375526582" watchObservedRunningTime="2026-01-27 20:10:22.697129674 +0000 UTC m=+167.404945380" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.737526 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595" podStartSLOduration=138.737503608 podStartE2EDuration="2m18.737503608s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:22.728623272 +0000 UTC m=+167.436438988" watchObservedRunningTime="2026-01-27 20:10:22.737503608 +0000 UTC m=+167.445319314" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.756185 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:22 crc kubenswrapper[4858]: E0127 20:10:22.756326 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:23.256301609 +0000 UTC m=+167.964117315 (durationBeforeRetry 500ms). 
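
The "Observed pod startup duration" records are internally consistent: podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp (for the router pod, 20:10:22.094100203 minus 20:08:04 is exactly 138.094100203s), and because firstStartedPulling and lastFinishedPulling are the zero time here, no image-pull window applies. A sketch of that arithmetic; subtracting the pull window is my reading of the zero timestamps, not something this excerpt confirms:

package main

import (
	"fmt"
	"time"
)

// sloDuration mirrors the relationship visible in the log lines:
// podStartSLOduration = (watchObservedRunningTime - podCreationTimestamp)
// minus any time spent pulling images (zero in this excerpt).
func sloDuration(created, firstPull, lastPull, observed time.Time) time.Duration {
	d := observed.Sub(created)
	if !firstPull.IsZero() && !lastPull.IsZero() {
		d -= lastPull.Sub(firstPull) // assumed exclusion of the pull window
	}
	return d
}

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-01-27T20:08:04Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2026-01-27T20:10:22.094100203Z")
	var zero time.Time // "0001-01-01 00:00:00 +0000 UTC" in the log
	fmt.Println(sloDuration(created, zero, zero, observed)) // 2m18.094100203s
}
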
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.756449 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:22 crc kubenswrapper[4858]: E0127 20:10:22.756925 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:23.256912617 +0000 UTC m=+167.964728323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.778295 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpvkr" podStartSLOduration=138.778277833 podStartE2EDuration="2m18.778277833s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:22.772601909 +0000 UTC m=+167.480417625" watchObservedRunningTime="2026-01-27 20:10:22.778277833 +0000 UTC m=+167.486093539" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.858110 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:22 crc kubenswrapper[4858]: E0127 20:10:22.858511 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:23.358494805 +0000 UTC m=+168.066310511 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.858313 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8p6rb" podStartSLOduration=139.858278329 podStartE2EDuration="2m19.858278329s" podCreationTimestamp="2026-01-27 20:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:22.822539319 +0000 UTC m=+167.530355055" watchObservedRunningTime="2026-01-27 20:10:22.858278329 +0000 UTC m=+167.566094035" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.912732 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-thkzl" podStartSLOduration=138.912708578 podStartE2EDuration="2m18.912708578s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:22.911806031 +0000 UTC m=+167.619621747" watchObservedRunningTime="2026-01-27 20:10:22.912708578 +0000 UTC m=+167.620524284" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.938008 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" podStartSLOduration=138.937978526 podStartE2EDuration="2m18.937978526s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:22.934947518 +0000 UTC m=+167.642763224" watchObservedRunningTime="2026-01-27 20:10:22.937978526 +0000 UTC m=+167.645794232" Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.959932 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:22 crc kubenswrapper[4858]: E0127 20:10:22.960416 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:23.460387902 +0000 UTC m=+168.168203598 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:22 crc kubenswrapper[4858]: I0127 20:10:22.994814 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c6zzp" podStartSLOduration=138.994792113 podStartE2EDuration="2m18.994792113s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:22.99121326 +0000 UTC m=+167.699028966" watchObservedRunningTime="2026-01-27 20:10:22.994792113 +0000 UTC m=+167.702607819" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.062919 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:23 crc kubenswrapper[4858]: E0127 20:10:23.063282 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:23.563265917 +0000 UTC m=+168.271081623 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.069570 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-flcdw" podStartSLOduration=140.069521147 podStartE2EDuration="2m20.069521147s" podCreationTimestamp="2026-01-27 20:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:23.035243879 +0000 UTC m=+167.743059585" watchObservedRunningTime="2026-01-27 20:10:23.069521147 +0000 UTC m=+167.777336853" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.071505 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" podStartSLOduration=139.071496914 podStartE2EDuration="2m19.071496914s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:23.069032193 +0000 UTC m=+167.776847899" watchObservedRunningTime="2026-01-27 20:10:23.071496914 +0000 UTC m=+167.779312620" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.100310 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" podStartSLOduration=139.100276084 podStartE2EDuration="2m19.100276084s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:23.100189981 +0000 UTC m=+167.808005707" watchObservedRunningTime="2026-01-27 20:10:23.100276084 +0000 UTC m=+167.808091790" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.137737 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-xpxs8" podStartSLOduration=140.137721203 podStartE2EDuration="2m20.137721203s" podCreationTimestamp="2026-01-27 20:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:23.135832869 +0000 UTC m=+167.843648575" watchObservedRunningTime="2026-01-27 20:10:23.137721203 +0000 UTC m=+167.845536909" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.165768 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:23 crc kubenswrapper[4858]: E0127 20:10:23.166162 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 20:10:23.666142232 +0000 UTC m=+168.373957928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.176675 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fqjkv" podStartSLOduration=139.176648105 podStartE2EDuration="2m19.176648105s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:23.175576854 +0000 UTC m=+167.883392580" watchObservedRunningTime="2026-01-27 20:10:23.176648105 +0000 UTC m=+167.884463811" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.193471 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:23 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:23 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:23 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.193598 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.271527 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:23 crc kubenswrapper[4858]: E0127 20:10:23.271646 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:23.771629152 +0000 UTC m=+168.479444858 (durationBeforeRetry 500ms). 
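
The router startup probe body above follows the aggregated healthz convention: one line per named check, [+] for passing and [-] for failing with "reason withheld", a trailing "healthz check failed", and HTTP 500 so the probe records statuscode: 500. A compact handler in that style; the check names are copied from the log while the wiring is illustrative:

package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	err  error
}

// healthzHandler renders checks in the aggregated healthz format seen
// in the router probe output: [+]name ok, [-]name failed, plus an
// overall failure trailer and a 500 status when any check fails.
func healthzHandler(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if c.err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // probe logs "statuscode: 500"
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	checks := []check{
		{"backend-http", fmt.Errorf("not ready")}, // failing, as in the log
		{"has-synced", fmt.Errorf("not ready")},   // failing, as in the log
		{"process-running", nil},                  // passing, as in the log
	}
	http.Handle("/healthz", healthzHandler(checks))
	http.ListenAndServe(":8080", nil) // curl localhost:8080/healthz to see the body
}
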
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.271759 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:23 crc kubenswrapper[4858]: E0127 20:10:23.311053 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:23.811030367 +0000 UTC m=+168.518846073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.388268 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:23 crc kubenswrapper[4858]: E0127 20:10:23.388784 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:23.888763468 +0000 UTC m=+168.596579174 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.490433 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:23 crc kubenswrapper[4858]: E0127 20:10:23.491135 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:23.991120698 +0000 UTC m=+168.698936404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.550879 4858 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mzt2r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.550951 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" podUID="37cd25e0-a46b-4f44-a271-2d15a2ac9b07" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.592595 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:23 crc kubenswrapper[4858]: E0127 20:10:23.593078 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:24.093059866 +0000 UTC m=+168.800875572 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.651592 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6gczg" event={"ID":"a208f5e4-eae1-4aed-bae2-bb0fa8e2b6f1","Type":"ContainerStarted","Data":"687972b61c5b6fe942dd4f33eff5ef1578836c9d99592194bec44ec0cc98e4bc"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.664018 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-nrls7" event={"ID":"c306264d-7be9-4ec5-a807-a77810848e27","Type":"ContainerStarted","Data":"3e41567a8fcf8306a641dc5b199558cb4b4a99936a93e0dd1f14302673088394"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.693957 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.693982 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-nrls7" podStartSLOduration=139.693963874 podStartE2EDuration="2m19.693963874s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:23.693343286 +0000 UTC m=+168.401159012" watchObservedRunningTime="2026-01-27 20:10:23.693963874 +0000 UTC m=+168.401779580" Jan 27 20:10:23 crc kubenswrapper[4858]: E0127 20:10:23.694347 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:24.194334635 +0000 UTC m=+168.902150341 (durationBeforeRetry 500ms). 
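
The "SyncLoop (PLEG): event for pod" lines are the kubelet's pod lifecycle event generator surfacing container state changes discovered from the container runtime: ID is the pod UID, Type is ContainerStarted or ContainerDied, and Data is the container (or sandbox) ID. A compact consumer of events in that shape; the struct mirrors the fields printed above, and the channel wiring is mine:

package main

import "fmt"

// podLifecycleEvent mirrors the fields the kubelet prints for PLEG
// events: pod UID, event type, and the container/sandbox ID.
type podLifecycleEvent struct {
	ID   string // pod UID
	Type string // "ContainerStarted", "ContainerDied", ...
	Data string // container or sandbox ID
}

func main() {
	events := make(chan podLifecycleEvent, 2)
	// Two events copied from the oauth-apiserver pod earlier in the log.
	events <- podLifecycleEvent{
		ID:   "20d50172-b3f8-431b-962c-14a22d356995",
		Type: "ContainerDied",
		Data: "f96529e19a8c5f6e5d08a4b59d692f414be595c869a86b1464d3e143003ffa22",
	}
	events <- podLifecycleEvent{
		ID:   "20d50172-b3f8-431b-962c-14a22d356995",
		Type: "ContainerStarted",
		Data: "6ef27deaf9f07067a8a20a22e9f2f8e9a76e6fa9f9adcea270921e1f3a50fd12",
	}
	close(events)

	for ev := range events {
		switch ev.Type {
		case "ContainerStarted":
			fmt.Printf("pod %s: container %s started\n", ev.ID, ev.Data)
		case "ContainerDied":
			fmt.Printf("pod %s: container %s exited\n", ev.ID, ev.Data)
		}
	}
}
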
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.710288 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" event={"ID":"81e828d5-7d0a-451f-98bd-05c2a2fcbea9","Type":"ContainerStarted","Data":"11fcc0d7678fc44e02df659beb90bb5a09d46d36b80f937358b0bbf14f1fd886"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.710690 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" event={"ID":"81e828d5-7d0a-451f-98bd-05c2a2fcbea9","Type":"ContainerStarted","Data":"9bb8589268b8487855423bc7bf3a62c5a3bf3ebba3a96f80a55d0522c33801bd"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.715222 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5cfl5" event={"ID":"1a7f117f-1312-480c-a761-22d6fbf087fe","Type":"ContainerStarted","Data":"e0ecc76ef0d42ced7eacbeed5cbc09bce5d3da85b45f65c1cdb8412f49b6d3ad"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.715271 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5cfl5" event={"ID":"1a7f117f-1312-480c-a761-22d6fbf087fe","Type":"ContainerStarted","Data":"9821ca21f7d7e992a810508bbba7ff4cab58d15340648c26cc2b7a49a2c48dce"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.717437 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" event={"ID":"cbe6b979-e2aa-46c6-b4b0-67464630cddf","Type":"ContainerStarted","Data":"bd90ba61cf782f70b1b30a477d15f1f5a425289484eb580e402277fc2f5d29f7"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.730702 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" event={"ID":"9f472a9b-da89-4553-b0fa-d6c8a2e59cca","Type":"ContainerStarted","Data":"3ee41464802db20c25997aa7cf647f60034fef9d4fb66063bc7a3542fe2fad28"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.731742 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.737342 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qvzfh" event={"ID":"d8e80fae-5661-4f89-82bd-c1264c0115dd","Type":"ContainerStarted","Data":"4b0b1c68f9e5c75315ac20cac9b8db25c554f0946897f1beb11134c8cca9ff76"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.737385 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qvzfh" event={"ID":"d8e80fae-5661-4f89-82bd-c1264c0115dd","Type":"ContainerStarted","Data":"d9a497c8450ef094928158711e9442a803056d002f379d1fe33e68071b1c20eb"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.742229 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" podStartSLOduration=140.742212425 podStartE2EDuration="2m20.742212425s" podCreationTimestamp="2026-01-27 20:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:23.741277398 +0000 UTC m=+168.449093104" watchObservedRunningTime="2026-01-27 20:10:23.742212425 +0000 UTC m=+168.450028131" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.744903 4858 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5vqm9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.744948 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" podUID="9f472a9b-da89-4553-b0fa-d6c8a2e59cca" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.745250 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-tpvkr" event={"ID":"64347cf6-4666-4346-b8b3-58300fa9c0c6","Type":"ContainerStarted","Data":"4ed07d252853bffd6d92cd38602f2b7d3e3a1f58f66b052c715c5c3210e6e251"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.747313 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" event={"ID":"904f2c42-7297-4cec-a1ab-c2abd4f5132b","Type":"ContainerStarted","Data":"18b7db30780155ed5a121b6df3674ce49627d0d20a6f134f384bdf784537def6"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.766612 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" event={"ID":"55f73f2a-838f-49aa-81ab-1f5ab6de718a","Type":"ContainerStarted","Data":"6f517f96fa497d7158bef0ad4c228d217c3d9791555220808e256e98661ebbb4"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.783676 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" podStartSLOduration=139.783653139 podStartE2EDuration="2m19.783653139s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:23.781065865 +0000 UTC m=+168.488881591" watchObservedRunningTime="2026-01-27 20:10:23.783653139 +0000 UTC m=+168.491468845" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.790573 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l7rb7" event={"ID":"3775c5de-f3e1-4bad-a5f3-8622f85db5ad","Type":"ContainerStarted","Data":"a803ccbfd2893021ac9fe263a4c0049c7a1b9dfb5c047364bc8d98d053db73b4"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.797942 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:23 crc kubenswrapper[4858]: E0127 20:10:23.800191 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:24.300170966 +0000 UTC m=+169.007986762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.805004 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tq895" event={"ID":"805a0505-7eaa-40a2-a363-e986eec80f0e","Type":"ContainerStarted","Data":"b7c4f9df770ac729c0b1eb8bc4fece5b49c5caed2401fa0ca08e36720b3c3c78"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.819922 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rsc77" event={"ID":"748214da-1856-4f93-82c5-34403ec46118","Type":"ContainerStarted","Data":"059478df30483fd9208faff0557375b62b1c9c997caf4c0fc66bad6b69130107"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.831665 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" podStartSLOduration=140.831651283 podStartE2EDuration="2m20.831651283s" podCreationTimestamp="2026-01-27 20:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:23.826316009 +0000 UTC m=+168.534131715" watchObservedRunningTime="2026-01-27 20:10:23.831651283 +0000 UTC m=+168.539466989" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.848893 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-gp595" event={"ID":"df828c47-fa99-4d7f-b3e5-46abda50e131","Type":"ContainerStarted","Data":"58084e2c577cdda13fad1300e932dc144128ca41f0f133bf04e0c252851c5cc1"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.865907 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69lnx" event={"ID":"c408a00d-4317-45aa-afc3-eacf9e1be32f","Type":"ContainerStarted","Data":"60c47a31e3e37d97f8ce730b0634d57369cd16c8e877522cac6d618a17a83d41"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.872274 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rl4lk" event={"ID":"324e5805-141d-4281-8f8d-909b796a36e3","Type":"ContainerStarted","Data":"778e2b140c7d71031f354202acbfd5f47c151715e523e9f276b7732151be10a3"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.873389 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p72qt" event={"ID":"25775e1a-346e-4b05-ae25-819a5aad12b7","Type":"ContainerStarted","Data":"e5b659a7489e2d82fafdb4ebff10ba0619c4ccb16ab454f8bb6889bf9c443749"} Jan 27 
20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.873418 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p72qt" event={"ID":"25775e1a-346e-4b05-ae25-819a5aad12b7","Type":"ContainerStarted","Data":"02c9e4062d6e347f44ea1fe99a7fa2dededa9904847ab253e0e01fe1e0d66341"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.882906 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" event={"ID":"b3f96545-5f43-4110-8de0-37141493c013","Type":"ContainerStarted","Data":"f99c16c425ca7775f5a1b39c14e686ed39097bb3f251ed235f29a228a1b9753c"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.882948 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.885130 4858 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-scdgl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.885177 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" podUID="b3f96545-5f43-4110-8de0-37141493c013" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.888894 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26" event={"ID":"0a871399-32f8-4b9b-9118-c7e537864ada","Type":"ContainerStarted","Data":"e87474f93a88e1e15bc2ae1047af68052142bb5e9577ba935f9372dc05710412"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.899356 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:23 crc kubenswrapper[4858]: E0127 20:10:23.901709 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:24.401693702 +0000 UTC m=+169.109509408 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.903887 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hn4z" podStartSLOduration=139.903872405 podStartE2EDuration="2m19.903872405s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:23.870850323 +0000 UTC m=+168.578666049" watchObservedRunningTime="2026-01-27 20:10:23.903872405 +0000 UTC m=+168.611688111" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.904966 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l7rb7" podStartSLOduration=139.904957876 podStartE2EDuration="2m19.904957876s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:23.902874136 +0000 UTC m=+168.610689852" watchObservedRunningTime="2026-01-27 20:10:23.904957876 +0000 UTC m=+168.612773582" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.915172 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" event={"ID":"e38452d4-9405-466f-99cf-0706d9ca1c4f","Type":"ContainerStarted","Data":"df97d4dad85d2d40d3b7f23487199e3b1b9e3dd2cb3a414b9ae6a75a6e548e5d"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.916134 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.938594 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" event={"ID":"d7b71f62-e0b7-4903-bb5a-7c081c83fd29","Type":"ContainerStarted","Data":"ad74be925f7758585cdae06c5d34eaad6c03a74f873ffeca436b39eb028bebf1"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.939792 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.949689 4858 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-shmm4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.949763 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" podUID="d7b71f62-e0b7-4903-bb5a-7c081c83fd29" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection 
refused" Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.967723 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-w27fl" event={"ID":"ab9ac3bd-eca9-44d5-a5f7-1074bc9889d0","Type":"ContainerStarted","Data":"b285f9f12cc3a0c453a1bb76dba33c5b287d6458140145b515857be99f97988a"} Jan 27 20:10:23 crc kubenswrapper[4858]: I0127 20:10:23.995016 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-g86cr" event={"ID":"3375f803-5cef-4621-b64c-e3682dd4758c","Type":"ContainerStarted","Data":"427d9666d832fd36facb0f418625f06dc52c703c2fc0d63ce45a90a22ed80249"} Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.001965 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" event={"ID":"b4d0e2b3-33dc-497e-94d1-4f728ac62fee","Type":"ContainerStarted","Data":"a4fdce15a41d44eea06485c9927a3279580d0eabfcf98dea9fc76b3c9acf508a"} Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.004390 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:24 crc kubenswrapper[4858]: E0127 20:10:24.005820 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:24.505800523 +0000 UTC m=+169.213616239 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.021217 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fsx7q" event={"ID":"d2fddf7b-44dd-4b24-be5e-385e1792abaf","Type":"ContainerStarted","Data":"5167f66553d7a29b376d3156c2b4618ca9259bcab204c473e0842433a6489872"} Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.038474 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rkkqh" event={"ID":"add9b00f-ce10-44d3-ade0-1881523bbefb","Type":"ContainerStarted","Data":"8f8dc5f75ccfb952b33aeed72678d4ede70d0db5e4ed3d5757247ca856d20af2"} Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.039166 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" podStartSLOduration=140.039144184 podStartE2EDuration="2m20.039144184s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:23.992729896 +0000 UTC m=+168.700545622" watchObservedRunningTime="2026-01-27 20:10:24.039144184 +0000 UTC m=+168.746959890" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.039448 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-rkkqh" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.040328 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-p72qt" podStartSLOduration=141.040320418 podStartE2EDuration="2m21.040320418s" podCreationTimestamp="2026-01-27 20:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:24.037789975 +0000 UTC m=+168.745605701" watchObservedRunningTime="2026-01-27 20:10:24.040320418 +0000 UTC m=+168.748136124" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.040506 4858 patch_prober.go:28] interesting pod/console-operator-58897d9998-rkkqh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.040563 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rkkqh" podUID="add9b00f-ce10-44d3-ade0-1881523bbefb" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.064978 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fzjn9" 
event={"ID":"0de7aa2b-2206-4637-9950-da5e9e287440","Type":"ContainerStarted","Data":"bf903d70895a97e327cd1b54c4c23a14009b9bd2480a22889d3cdf2a572ff6ba"} Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.065071 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fzjn9" event={"ID":"0de7aa2b-2206-4637-9950-da5e9e287440","Type":"ContainerStarted","Data":"74e623ed8f4b6031b132fbfbf2883b956d85f0ddd8031f1aac686054fed6cc3b"} Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.066695 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.067486 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.066927 4858 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-dvbh6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.067732 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" podUID="bb41d7df-dacd-41b0-8399-63ddcee318f6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.075894 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.079932 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69lnx" podStartSLOduration=141.079907769 podStartE2EDuration="2m21.079907769s" podCreationTimestamp="2026-01-27 20:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:24.064211196 +0000 UTC m=+168.772026922" watchObservedRunningTime="2026-01-27 20:10:24.079907769 +0000 UTC m=+168.787723475" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.106829 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:24 crc kubenswrapper[4858]: E0127 20:10:24.108099 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:24.608083731 +0000 UTC m=+169.315899497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.111663 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fsx7q" podStartSLOduration=140.111645233 podStartE2EDuration="2m20.111645233s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:24.11154077 +0000 UTC m=+168.819356486" watchObservedRunningTime="2026-01-27 20:10:24.111645233 +0000 UTC m=+168.819460939" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.117126 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.186897 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" podStartSLOduration=140.186877042 podStartE2EDuration="2m20.186877042s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:24.178868241 +0000 UTC m=+168.886683947" watchObservedRunningTime="2026-01-27 20:10:24.186877042 +0000 UTC m=+168.894692748" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.200438 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:24 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:24 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:24 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.200872 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.210469 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:24 crc kubenswrapper[4858]: E0127 20:10:24.212101 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:24.712082338 +0000 UTC m=+169.419898044 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.313689 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:24 crc kubenswrapper[4858]: E0127 20:10:24.314211 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:24.814198061 +0000 UTC m=+169.522013757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.417043 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:24 crc kubenswrapper[4858]: E0127 20:10:24.417525 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:24.917504959 +0000 UTC m=+169.625320665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.448580 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" podStartSLOduration=140.448560894 podStartE2EDuration="2m20.448560894s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:24.271182831 +0000 UTC m=+168.978998557" watchObservedRunningTime="2026-01-27 20:10:24.448560894 +0000 UTC m=+169.156376600" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.498829 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-rkkqh" podStartSLOduration=141.498797782 podStartE2EDuration="2m21.498797782s" podCreationTimestamp="2026-01-27 20:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:24.451019085 +0000 UTC m=+169.158834811" watchObservedRunningTime="2026-01-27 20:10:24.498797782 +0000 UTC m=+169.206613488" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.501456 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.519453 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:24 crc kubenswrapper[4858]: E0127 20:10:24.520440 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:25.020421525 +0000 UTC m=+169.728237231 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.572674 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" podStartSLOduration=141.572651971 podStartE2EDuration="2m21.572651971s" podCreationTimestamp="2026-01-27 20:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:24.507657707 +0000 UTC m=+169.215473433" watchObservedRunningTime="2026-01-27 20:10:24.572651971 +0000 UTC m=+169.280467677" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.590936 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.621256 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:24 crc kubenswrapper[4858]: E0127 20:10:24.622181 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:25.122164088 +0000 UTC m=+169.829979794 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.660918 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-fzjn9" podStartSLOduration=7.660895034 podStartE2EDuration="7.660895034s" podCreationTimestamp="2026-01-27 20:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:24.613746475 +0000 UTC m=+169.321562181" watchObservedRunningTime="2026-01-27 20:10:24.660895034 +0000 UTC m=+169.368710750" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.711772 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.712166 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.713515 4858 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xp2mw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.713588 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" podUID="cbe6b979-e2aa-46c6-b4b0-67464630cddf" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.723309 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:24 crc kubenswrapper[4858]: E0127 20:10:24.723699 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:25.223682204 +0000 UTC m=+169.931497900 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.810384 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.810748 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.825008 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:24 crc kubenswrapper[4858]: E0127 20:10:24.825185 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:25.325150809 +0000 UTC m=+170.032966515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.825596 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:24 crc kubenswrapper[4858]: E0127 20:10:24.826194 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:25.326166658 +0000 UTC m=+170.033982364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.926631 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:24 crc kubenswrapper[4858]: E0127 20:10:24.926846 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:25.426814499 +0000 UTC m=+170.134630215 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:24 crc kubenswrapper[4858]: I0127 20:10:24.927380 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:24 crc kubenswrapper[4858]: E0127 20:10:24.927895 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:25.427876249 +0000 UTC m=+170.135691955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.029067 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:25 crc kubenswrapper[4858]: E0127 20:10:25.029304 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:25.529268832 +0000 UTC m=+170.237084538 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.029397 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:25 crc kubenswrapper[4858]: E0127 20:10:25.029765 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:25.529754836 +0000 UTC m=+170.237570592 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.082220 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26" event={"ID":"0a871399-32f8-4b9b-9118-c7e537864ada","Type":"ContainerStarted","Data":"8c22fb42aebd702ceb077bfab8e5d15ba8b4446b5a8ed49a347c0406f4a735fd"} Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.094224 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-69lnx" event={"ID":"c408a00d-4317-45aa-afc3-eacf9e1be32f","Type":"ContainerStarted","Data":"8fa7041aa9e034c7213ce11abb14c54abcb697401df355d6101eb0227fad9877"} Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.096591 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qvzfh" event={"ID":"d8e80fae-5661-4f89-82bd-c1264c0115dd","Type":"ContainerStarted","Data":"1f05a5dc5815d0273c37dedb79776248b8af45413f54c4e3e31f2510714f6077"} Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.102059 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5cfl5" event={"ID":"1a7f117f-1312-480c-a761-22d6fbf087fe","Type":"ContainerStarted","Data":"93741ce8312225ff9a945dfe552a49928e3c56165e453c63833982f2ef9e5738"} Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.102183 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5cfl5" Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.104039 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tq895" event={"ID":"805a0505-7eaa-40a2-a363-e986eec80f0e","Type":"ContainerStarted","Data":"80a3536293664ba534f61719a042df923812af09d432b3ae9b4c41077ce5f300"} Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.106873 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9nxt8" event={"ID":"a18bc75b-fceb-4545-8a48-3296b1ce8f5c","Type":"ContainerStarted","Data":"451c707a5efbb24c5f9418af3f665bfaaf581c5c5fb7186913840eb81acad82f"} Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.108760 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rl4lk" event={"ID":"324e5805-141d-4281-8f8d-909b796a36e3","Type":"ContainerStarted","Data":"0c1be9f2d37454ee3421104ce49c51fe4c6dd642c59725ed390c80179db53b37"} Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.108934 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-rl4lk" Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.119117 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-w27fl" 
event={"ID":"ab9ac3bd-eca9-44d5-a5f7-1074bc9889d0","Type":"ContainerStarted","Data":"cebf0c2250a546d8fd3e3a4c8d29bc369e0a203f9889447d13cd6ddaaa8ee09d"} Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.119167 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-w27fl" event={"ID":"ab9ac3bd-eca9-44d5-a5f7-1074bc9889d0","Type":"ContainerStarted","Data":"21f47742f6f8286e0c80e243094aeafe3a5af6251b0610d1b8813c78a26b27fe"} Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.123643 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2szkp" event={"ID":"b4d0e2b3-33dc-497e-94d1-4f728ac62fee","Type":"ContainerStarted","Data":"989605d66a4ffaadeeba82fc371dbd381e52058503e91c907e388aa8f94efefc"} Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.131197 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:25 crc kubenswrapper[4858]: E0127 20:10:25.131688 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:25.631654103 +0000 UTC m=+170.339469809 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.135276 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjf26" podStartSLOduration=141.135260637 podStartE2EDuration="2m21.135260637s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:25.133209978 +0000 UTC m=+169.841025674" watchObservedRunningTime="2026-01-27 20:10:25.135260637 +0000 UTC m=+169.843076343" Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.140123 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" event={"ID":"904f2c42-7297-4cec-a1ab-c2abd4f5132b","Type":"ContainerStarted","Data":"7af1ac2ce1008a3f2e58efe339329d09ccc562b19069365406db0bd2f071d56b"} Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.141296 4858 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5vqm9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.141345 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" 
podUID="9f472a9b-da89-4553-b0fa-d6c8a2e59cca" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.142664 4858 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-shmm4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.142668 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.142758 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.142790 4858 patch_prober.go:28] interesting pod/console-operator-58897d9998-rkkqh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.142698 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4" podUID="d7b71f62-e0b7-4903-bb5a-7c081c83fd29" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.142812 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rkkqh" podUID="add9b00f-ce10-44d3-ade0-1881523bbefb" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.144388 4858 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-scdgl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.144452 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" podUID="b3f96545-5f43-4110-8de0-37141493c013" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.161938 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.191721 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router 
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.191721 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 20:10:25 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld
Jan 27 20:10:25 crc kubenswrapper[4858]: [+]process-running ok
Jan 27 20:10:25 crc kubenswrapper[4858]: healthz check failed
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.192106 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.229624 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tq895" podStartSLOduration=141.229606616 podStartE2EDuration="2m21.229606616s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:25.228683209 +0000 UTC m=+169.936498915" watchObservedRunningTime="2026-01-27 20:10:25.229606616 +0000 UTC m=+169.937422322"
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.231854 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-w27fl" podStartSLOduration=141.231845761 podStartE2EDuration="2m21.231845761s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:25.19957711 +0000 UTC m=+169.907392826" watchObservedRunningTime="2026-01-27 20:10:25.231845761 +0000 UTC m=+169.939661467"
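
[note] The "Observed pod startup duration" entries are plain clock arithmetic: podStartSLOduration is the observed running time minus podCreationTimestamp, and with no image pull involved the pull timestamps stay at Go's zero time 0001-01-01. A sketch reproducing the ~2m21s figure from the kube-controller-manager-operator entry above, assuming the timestamp layout matches the format shown in the log:

    // startup_latency.go: recomputes podStartSLOduration from the two
    // timestamps logged above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2026-01-27 20:08:04 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2026-01-27 20:10:25.228683209 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // Prints 2m21.228683209s; the logged 2m21.229606616s was sampled a
        // fraction of a millisecond later (watchObservedRunningTime).
        fmt.Println(running.Sub(created))
    }
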
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.232110 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47"
Jan 27 20:10:25 crc kubenswrapper[4858]: E0127 20:10:25.233674 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:25.733658393 +0000 UTC m=+170.441474099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.335243 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 20:10:25 crc kubenswrapper[4858]: E0127 20:10:25.338322 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:25.83765202 +0000 UTC m=+170.545467726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
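
[note] Every UnmountVolume/MountVolume failure in this stretch has the same root cause: the kubelet's volume manager needs a CSI client for kubevirt.io.hostpath-provisioner, but that driver has not yet registered over the kubelet's plugin-registration socket (its pod, csi-hostpathplugin-5mx8r, only just started above), so both the teardown for pod 8f668bae-... and the mount for the image registry are requeued every 500ms (durationBeforeRetry) until registration lands. One way to watch the registration converge from outside, sketched with client-go (an assumption, not tooling shown in this log), is to poll the node's CSINode object, which lists the drivers registered on the node:

    // csinode_wait.go: a sketch that polls CSINode "crc" until the hostpath
    // driver appears in its registered-driver list.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        for {
            csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
            if err == nil {
                for _, d := range csiNode.Spec.Drivers {
                    if d.Name == "kubevirt.io.hostpath-provisioner" {
                        fmt.Println("driver registered; mounts can proceed")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // same cadence as the kubelet's retries
        }
    }
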
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.358537 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rl4lk" podStartSLOduration=8.358515061 podStartE2EDuration="8.358515061s" podCreationTimestamp="2026-01-27 20:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:25.357153002 +0000 UTC m=+170.064968708" watchObservedRunningTime="2026-01-27 20:10:25.358515061 +0000 UTC m=+170.066330767"
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.359693 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qvzfh" podStartSLOduration=141.359686335 podStartE2EDuration="2m21.359686335s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:25.327762295 +0000 UTC m=+170.035578001" watchObservedRunningTime="2026-01-27 20:10:25.359686335 +0000 UTC m=+170.067502061"
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.439858 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47"
Jan 27 20:10:25 crc kubenswrapper[4858]: E0127 20:10:25.440276 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:25.940263138 +0000 UTC m=+170.648078844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.445895 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5cfl5" podStartSLOduration=141.445880099 podStartE2EDuration="2m21.445880099s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:25.41086767 +0000 UTC m=+170.118683386" watchObservedRunningTime="2026-01-27 20:10:25.445880099 +0000 UTC m=+170.153695805"
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.448377 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-9nxt8" podStartSLOduration=141.448369291 podStartE2EDuration="2m21.448369291s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:25.446720294 +0000 UTC m=+170.154536030" watchObservedRunningTime="2026-01-27 20:10:25.448369291 +0000 UTC m=+170.156184997"
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.541434 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 20:10:25 crc kubenswrapper[4858]: E0127 20:10:25.541858 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:26.041839585 +0000 UTC m=+170.749655291 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.639114 4858 csr.go:261] certificate signing request csr-4jvwr is approved, waiting to be issued
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.642901 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47"
Jan 27 20:10:25 crc kubenswrapper[4858]: E0127 20:10:25.643266 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:26.143245198 +0000 UTC m=+170.851060964 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.666604 4858 csr.go:257] certificate signing request csr-4jvwr is issued
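
[note] The two csr.go entries are the tail of serving-certificate bootstrap: csr-4jvwr is first approved (an approver controller sets the Approved condition) and is only reported as issued once the signer populates status.certificate. A sketch that classifies a CSR the same way, again assuming client-go and credentials that can read CSRs:

    // csr_state.go: a sketch mirroring csr.go's two logged states for csr-4jvwr.
    package main

    import (
        "context"
        "fmt"

        certv1 "k8s.io/api/certificates/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        csr, err := cs.CertificatesV1().CertificateSigningRequests().Get(context.TODO(), "csr-4jvwr", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        approved := false
        for _, c := range csr.Status.Conditions {
            if c.Type == certv1.CertificateApproved {
                approved = true
            }
        }
        switch {
        case approved && len(csr.Status.Certificate) > 0:
            fmt.Println("issued")
        case approved:
            fmt.Println("approved, waiting to be issued")
        default:
            fmt.Println("pending approval")
        }
    }
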
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.744255 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 20:10:25 crc kubenswrapper[4858]: E0127 20:10:25.744460 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:26.244427394 +0000 UTC m=+170.952243100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.744490 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47"
Jan 27 20:10:25 crc kubenswrapper[4858]: E0127 20:10:25.744859 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:26.244851417 +0000 UTC m=+170.952667123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.845441 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 20:10:25 crc kubenswrapper[4858]: E0127 20:10:25.845655 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:26.345624681 +0000 UTC m=+171.053440387 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.845721 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:25 crc kubenswrapper[4858]: E0127 20:10:25.846024 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:26.346010442 +0000 UTC m=+171.053826148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.946895 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:25 crc kubenswrapper[4858]: E0127 20:10:25.947110 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:26.447076205 +0000 UTC m=+171.154891921 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:25 crc kubenswrapper[4858]: I0127 20:10:25.947173 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:25 crc kubenswrapper[4858]: E0127 20:10:25.947658 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:26.447648852 +0000 UTC m=+171.155464598 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.048362 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:26 crc kubenswrapper[4858]: E0127 20:10:26.048590 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:26.54855936 +0000 UTC m=+171.256375086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.048835 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:26 crc kubenswrapper[4858]: E0127 20:10:26.049306 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:26.549292731 +0000 UTC m=+171.257108437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.150304 4858 patch_prober.go:28] interesting pod/console-operator-58897d9998-rkkqh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.150355 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rkkqh" podUID="add9b00f-ce10-44d3-ade0-1881523bbefb" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.150683 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:26 crc kubenswrapper[4858]: E0127 20:10:26.151029 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:26.651013553 +0000 UTC m=+171.358829259 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.151204 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:26 crc kubenswrapper[4858]: E0127 20:10:26.151499 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:26.651487287 +0000 UTC m=+171.359302993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.160313 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-qg6xk container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.160395 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" podUID="e38452d4-9405-466f-99cf-0706d9ca1c4f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.175538 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.201579 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:26 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:26 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:26 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.201634 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:26 crc 
kubenswrapper[4858]: I0127 20:10:26.219716 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5vqm9" Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.252252 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:26 crc kubenswrapper[4858]: E0127 20:10:26.252438 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:26.752408416 +0000 UTC m=+171.460224112 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.252520 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs\") pod \"network-metrics-daemon-j5hlm\" (UID: \"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\") " pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.252748 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:26 crc kubenswrapper[4858]: E0127 20:10:26.258005 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:26.757990407 +0000 UTC m=+171.465806113 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.266420 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fa7e9cb-b195-401a-b57c-bdb47f36ffb8-metrics-certs\") pod \"network-metrics-daemon-j5hlm\" (UID: \"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8\") " pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.354923 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:26 crc kubenswrapper[4858]: E0127 20:10:26.355376 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:26.855344073 +0000 UTC m=+171.563159779 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.456848 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:26 crc kubenswrapper[4858]: E0127 20:10:26.457207 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:26.957194918 +0000 UTC m=+171.665010624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.494046 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j5hlm" Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.558017 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:26 crc kubenswrapper[4858]: E0127 20:10:26.558922 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:27.058884249 +0000 UTC m=+171.766699955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.660575 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:26 crc kubenswrapper[4858]: E0127 20:10:26.660982 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:27.160966981 +0000 UTC m=+171.868782687 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.668047 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 20:05:25 +0000 UTC, rotation deadline is 2026-12-17 15:00:27.955447453 +0000 UTC
Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.668099 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7770h50m1.287351856s for next certificate rotation
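
[note] The certificate_manager pair above encodes the rotation policy: the serving certificate is valid for one year (expiring 2027-01-27 20:05:25), and rotation is scheduled at a jittered point late in that window rather than at expiry; the deadline logged here falls about 88.7% of the way through, and the wait is simply deadline minus the log timestamp. A sketch of the arithmetic, with notBefore assumed from the implied one-year validity:

    // rotation_math.go: checks the two certificate_manager lines against
    // each other.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        notAfter := time.Date(2027, 1, 27, 20, 5, 25, 0, time.UTC)          // logged expiration
        notBefore := notAfter.AddDate(-1, 0, 0)                             // assumed issue time (one-year cert)
        deadline := time.Date(2026, 12, 17, 15, 0, 27, 955447453, time.UTC) // logged rotation deadline
        now := time.Date(2026, 1, 27, 20, 10, 26, 668047000, time.UTC)      // timestamp of the log line

        frac := float64(deadline.Sub(notBefore)) / float64(notAfter.Sub(notBefore))
        fmt.Printf("deadline at %.1f%% of validity\n", frac*100) // ~88.7%
        fmt.Println("waiting", deadline.Sub(now))                // ~7770h50m1.287s, as logged
    }
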
Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.761909 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 20:10:26 crc kubenswrapper[4858]: E0127 20:10:26.762173 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:27.262125577 +0000 UTC m=+171.969941303 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.762505 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47"
Jan 27 20:10:26 crc kubenswrapper[4858]: E0127 20:10:26.762961 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:27.262944921 +0000 UTC m=+171.970760627 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.848342 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-shmm4"
Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.863696 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 20:10:26 crc kubenswrapper[4858]: E0127 20:10:26.864029 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:27.364012013 +0000 UTC m=+172.071827719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 20:10:26 crc kubenswrapper[4858]: I0127 20:10:26.965173 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47"
Jan 27 20:10:26 crc kubenswrapper[4858]: E0127 20:10:26.965945 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:27.46593064 +0000 UTC m=+172.173746346 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.067309 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:27 crc kubenswrapper[4858]: E0127 20:10:27.067769 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:27.567748785 +0000 UTC m=+172.275564491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.176348 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:27 crc kubenswrapper[4858]: E0127 20:10:27.176692 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:27.676679404 +0000 UTC m=+172.384495110 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.198325 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-242rs" Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.206194 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:27 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:27 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:27 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.206254 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.282093 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:27 crc kubenswrapper[4858]: E0127 20:10:27.283924 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:27.783900525 +0000 UTC m=+172.491716231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.386471 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:27 crc kubenswrapper[4858]: E0127 20:10:27.386881 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:27.886866973 +0000 UTC m=+172.594682679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.487880 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:27 crc kubenswrapper[4858]: E0127 20:10:27.488254 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:27.988233994 +0000 UTC m=+172.696049710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.502890 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-j5hlm"] Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.589298 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:27 crc kubenswrapper[4858]: E0127 20:10:27.589793 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:28.089775231 +0000 UTC m=+172.797590947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.693149 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:27 crc kubenswrapper[4858]: E0127 20:10:27.693295 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:28.193270784 +0000 UTC m=+172.901086500 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.694026 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:27 crc kubenswrapper[4858]: E0127 20:10:27.694361 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:28.194349835 +0000 UTC m=+172.902165541 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.794790 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:27 crc kubenswrapper[4858]: E0127 20:10:27.795215 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:28.295195212 +0000 UTC m=+173.003010918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.816950 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j4jtm"] Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.818226 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j4jtm" Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.834340 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.896466 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.896541 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7cqg\" (UniqueName: \"kubernetes.io/projected/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-kube-api-access-w7cqg\") pod \"certified-operators-j4jtm\" (UID: \"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc\") " pod="openshift-marketplace/certified-operators-j4jtm" Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.896653 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-catalog-content\") pod \"certified-operators-j4jtm\" (UID: \"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc\") " pod="openshift-marketplace/certified-operators-j4jtm" Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.896681 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-utilities\") pod \"certified-operators-j4jtm\" (UID: \"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc\") " pod="openshift-marketplace/certified-operators-j4jtm" Jan 27 20:10:27 crc kubenswrapper[4858]: E0127 20:10:27.897165 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:28.39713412 +0000 UTC m=+173.104949996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:27 crc kubenswrapper[4858]: I0127 20:10:27.966590 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4jtm"] Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.003233 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:28 crc kubenswrapper[4858]: E0127 20:10:28.003602 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:28.503572718 +0000 UTC m=+173.211388424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.003729 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-catalog-content\") pod \"certified-operators-j4jtm\" (UID: \"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc\") " pod="openshift-marketplace/certified-operators-j4jtm" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.003773 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-utilities\") pod \"certified-operators-j4jtm\" (UID: \"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc\") " pod="openshift-marketplace/certified-operators-j4jtm" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.003846 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.003923 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7cqg\" (UniqueName: \"kubernetes.io/projected/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-kube-api-access-w7cqg\") pod \"certified-operators-j4jtm\" (UID: \"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc\") " pod="openshift-marketplace/certified-operators-j4jtm" Jan 27 20:10:28 crc kubenswrapper[4858]: 
I0127 20:10:28.004336 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-utilities\") pod \"certified-operators-j4jtm\" (UID: \"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc\") " pod="openshift-marketplace/certified-operators-j4jtm" Jan 27 20:10:28 crc kubenswrapper[4858]: E0127 20:10:28.004446 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:28.504423322 +0000 UTC m=+173.212239028 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.004668 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-catalog-content\") pod \"certified-operators-j4jtm\" (UID: \"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc\") " pod="openshift-marketplace/certified-operators-j4jtm" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.005444 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b9vrj"] Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.006656 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b9vrj" Jan 27 20:10:28 crc kubenswrapper[4858]: W0127 20:10:28.057793 4858 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: secrets "community-operators-dockercfg-dmngl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 27 20:10:28 crc kubenswrapper[4858]: E0127 20:10:28.057839 4858 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"community-operators-dockercfg-dmngl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.105975 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.106213 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcc4s\" (UniqueName: \"kubernetes.io/projected/9cdbabda-bda6-438a-a671-0f15b0ad57c0-kube-api-access-hcc4s\") pod \"community-operators-b9vrj\" (UID: \"9cdbabda-bda6-438a-a671-0f15b0ad57c0\") " pod="openshift-marketplace/community-operators-b9vrj" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.106347 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cdbabda-bda6-438a-a671-0f15b0ad57c0-catalog-content\") pod \"community-operators-b9vrj\" (UID: \"9cdbabda-bda6-438a-a671-0f15b0ad57c0\") " pod="openshift-marketplace/community-operators-b9vrj" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.106390 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cdbabda-bda6-438a-a671-0f15b0ad57c0-utilities\") pod \"community-operators-b9vrj\" (UID: \"9cdbabda-bda6-438a-a671-0f15b0ad57c0\") " pod="openshift-marketplace/community-operators-b9vrj" Jan 27 20:10:28 crc kubenswrapper[4858]: E0127 20:10:28.106508 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:28.606491204 +0000 UTC m=+173.314306910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.155483 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7cqg\" (UniqueName: \"kubernetes.io/projected/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-kube-api-access-w7cqg\") pod \"certified-operators-j4jtm\" (UID: \"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc\") " pod="openshift-marketplace/certified-operators-j4jtm" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.192716 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:28 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:28 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:28 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.192770 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.193831 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b9vrj"] Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.210507 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cdbabda-bda6-438a-a671-0f15b0ad57c0-catalog-content\") pod \"community-operators-b9vrj\" (UID: \"9cdbabda-bda6-438a-a671-0f15b0ad57c0\") " pod="openshift-marketplace/community-operators-b9vrj" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.210620 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cdbabda-bda6-438a-a671-0f15b0ad57c0-utilities\") pod \"community-operators-b9vrj\" (UID: \"9cdbabda-bda6-438a-a671-0f15b0ad57c0\") " pod="openshift-marketplace/community-operators-b9vrj" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.210658 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcc4s\" (UniqueName: \"kubernetes.io/projected/9cdbabda-bda6-438a-a671-0f15b0ad57c0-kube-api-access-hcc4s\") pod \"community-operators-b9vrj\" (UID: \"9cdbabda-bda6-438a-a671-0f15b0ad57c0\") " pod="openshift-marketplace/community-operators-b9vrj" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.210716 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:28 crc kubenswrapper[4858]: E0127 
20:10:28.211094 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:28.711077388 +0000 UTC m=+173.418893094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.211644 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cdbabda-bda6-438a-a671-0f15b0ad57c0-catalog-content\") pod \"community-operators-b9vrj\" (UID: \"9cdbabda-bda6-438a-a671-0f15b0ad57c0\") " pod="openshift-marketplace/community-operators-b9vrj" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.211918 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cdbabda-bda6-438a-a671-0f15b0ad57c0-utilities\") pod \"community-operators-b9vrj\" (UID: \"9cdbabda-bda6-438a-a671-0f15b0ad57c0\") " pod="openshift-marketplace/community-operators-b9vrj" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.281085 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2r5qs"] Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.288336 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2r5qs" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.293491 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcc4s\" (UniqueName: \"kubernetes.io/projected/9cdbabda-bda6-438a-a671-0f15b0ad57c0-kube-api-access-hcc4s\") pod \"community-operators-b9vrj\" (UID: \"9cdbabda-bda6-438a-a671-0f15b0ad57c0\") " pod="openshift-marketplace/community-operators-b9vrj" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.312004 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:28 crc kubenswrapper[4858]: E0127 20:10:28.312654 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:28.812633825 +0000 UTC m=+173.520449531 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.337353 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2r5qs"] Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.358274 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" event={"ID":"904f2c42-7297-4cec-a1ab-c2abd4f5132b","Type":"ContainerStarted","Data":"25736c30d1d1246b465d78406fc621c6176861e71597405381cb62fd73d3e8e7"} Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.358314 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" event={"ID":"904f2c42-7297-4cec-a1ab-c2abd4f5132b","Type":"ContainerStarted","Data":"ae583715b21e2262e64d4874cdb9937c0d6f0530a3c5a2d3998a6828098731a2"} Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.371219 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rl5k9"] Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.415971 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rl5k9" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.423923 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:28 crc kubenswrapper[4858]: E0127 20:10:28.424293 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:28.924270733 +0000 UTC m=+173.632086439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.429510 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rl5k9"] Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.438072 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j4jtm" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.448169 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" event={"ID":"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8","Type":"ContainerStarted","Data":"a7ba5dd919983d4c8ecca1f8c895169649c749d6c1b164a5f1d2186175ff87e9"} Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.448223 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" event={"ID":"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8","Type":"ContainerStarted","Data":"2e531def03b943f742866da8fa210b0437db68e9e062321f0e88a725ed5701ca"} Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.525003 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.525160 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzdhb\" (UniqueName: \"kubernetes.io/projected/a57b4016-f4b5-4f01-aeed-9a730cd323c1-kube-api-access-bzdhb\") pod \"community-operators-rl5k9\" (UID: \"a57b4016-f4b5-4f01-aeed-9a730cd323c1\") " pod="openshift-marketplace/community-operators-rl5k9" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.525183 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a57b4016-f4b5-4f01-aeed-9a730cd323c1-utilities\") pod \"community-operators-rl5k9\" (UID: \"a57b4016-f4b5-4f01-aeed-9a730cd323c1\") " pod="openshift-marketplace/community-operators-rl5k9" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.525233 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmd5b\" (UniqueName: \"kubernetes.io/projected/da279f23-0e34-40de-9b49-f325361ce0ff-kube-api-access-mmd5b\") pod \"certified-operators-2r5qs\" (UID: \"da279f23-0e34-40de-9b49-f325361ce0ff\") " pod="openshift-marketplace/certified-operators-2r5qs" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.525281 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da279f23-0e34-40de-9b49-f325361ce0ff-utilities\") pod \"certified-operators-2r5qs\" (UID: \"da279f23-0e34-40de-9b49-f325361ce0ff\") " pod="openshift-marketplace/certified-operators-2r5qs" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.525307 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da279f23-0e34-40de-9b49-f325361ce0ff-catalog-content\") pod \"certified-operators-2r5qs\" (UID: \"da279f23-0e34-40de-9b49-f325361ce0ff\") " pod="openshift-marketplace/certified-operators-2r5qs" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.525323 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a57b4016-f4b5-4f01-aeed-9a730cd323c1-catalog-content\") pod \"community-operators-rl5k9\" (UID: 
\"a57b4016-f4b5-4f01-aeed-9a730cd323c1\") " pod="openshift-marketplace/community-operators-rl5k9" Jan 27 20:10:28 crc kubenswrapper[4858]: E0127 20:10:28.525494 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:29.02547676 +0000 UTC m=+173.733292466 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.629217 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.629304 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da279f23-0e34-40de-9b49-f325361ce0ff-utilities\") pod \"certified-operators-2r5qs\" (UID: \"da279f23-0e34-40de-9b49-f325361ce0ff\") " pod="openshift-marketplace/certified-operators-2r5qs" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.629336 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da279f23-0e34-40de-9b49-f325361ce0ff-catalog-content\") pod \"certified-operators-2r5qs\" (UID: \"da279f23-0e34-40de-9b49-f325361ce0ff\") " pod="openshift-marketplace/certified-operators-2r5qs" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.629363 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a57b4016-f4b5-4f01-aeed-9a730cd323c1-catalog-content\") pod \"community-operators-rl5k9\" (UID: \"a57b4016-f4b5-4f01-aeed-9a730cd323c1\") " pod="openshift-marketplace/community-operators-rl5k9" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.629506 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzdhb\" (UniqueName: \"kubernetes.io/projected/a57b4016-f4b5-4f01-aeed-9a730cd323c1-kube-api-access-bzdhb\") pod \"community-operators-rl5k9\" (UID: \"a57b4016-f4b5-4f01-aeed-9a730cd323c1\") " pod="openshift-marketplace/community-operators-rl5k9" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.629539 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a57b4016-f4b5-4f01-aeed-9a730cd323c1-utilities\") pod \"community-operators-rl5k9\" (UID: \"a57b4016-f4b5-4f01-aeed-9a730cd323c1\") " pod="openshift-marketplace/community-operators-rl5k9" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.629595 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmd5b\" (UniqueName: 
\"kubernetes.io/projected/da279f23-0e34-40de-9b49-f325361ce0ff-kube-api-access-mmd5b\") pod \"certified-operators-2r5qs\" (UID: \"da279f23-0e34-40de-9b49-f325361ce0ff\") " pod="openshift-marketplace/certified-operators-2r5qs" Jan 27 20:10:28 crc kubenswrapper[4858]: E0127 20:10:28.630659 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:29.130641261 +0000 UTC m=+173.838456967 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.631128 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a57b4016-f4b5-4f01-aeed-9a730cd323c1-utilities\") pod \"community-operators-rl5k9\" (UID: \"a57b4016-f4b5-4f01-aeed-9a730cd323c1\") " pod="openshift-marketplace/community-operators-rl5k9" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.631269 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da279f23-0e34-40de-9b49-f325361ce0ff-catalog-content\") pod \"certified-operators-2r5qs\" (UID: \"da279f23-0e34-40de-9b49-f325361ce0ff\") " pod="openshift-marketplace/certified-operators-2r5qs" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.631324 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da279f23-0e34-40de-9b49-f325361ce0ff-utilities\") pod \"certified-operators-2r5qs\" (UID: \"da279f23-0e34-40de-9b49-f325361ce0ff\") " pod="openshift-marketplace/certified-operators-2r5qs" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.633979 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a57b4016-f4b5-4f01-aeed-9a730cd323c1-catalog-content\") pod \"community-operators-rl5k9\" (UID: \"a57b4016-f4b5-4f01-aeed-9a730cd323c1\") " pod="openshift-marketplace/community-operators-rl5k9" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.701579 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmd5b\" (UniqueName: \"kubernetes.io/projected/da279f23-0e34-40de-9b49-f325361ce0ff-kube-api-access-mmd5b\") pod \"certified-operators-2r5qs\" (UID: \"da279f23-0e34-40de-9b49-f325361ce0ff\") " pod="openshift-marketplace/certified-operators-2r5qs" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.719120 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzdhb\" (UniqueName: \"kubernetes.io/projected/a57b4016-f4b5-4f01-aeed-9a730cd323c1-kube-api-access-bzdhb\") pod \"community-operators-rl5k9\" (UID: \"a57b4016-f4b5-4f01-aeed-9a730cd323c1\") " pod="openshift-marketplace/community-operators-rl5k9" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.737390 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:28 crc kubenswrapper[4858]: E0127 20:10:28.739046 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:29.239017025 +0000 UTC m=+173.946832721 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.839371 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:28 crc kubenswrapper[4858]: E0127 20:10:28.839736 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:29.339723337 +0000 UTC m=+174.047539043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.938694 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg6xk" Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.940227 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:28 crc kubenswrapper[4858]: E0127 20:10:28.940871 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:29.440832022 +0000 UTC m=+174.148647728 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:28 crc kubenswrapper[4858]: I0127 20:10:28.972999 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2r5qs" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.033074 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4jtm"] Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.042487 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:29 crc kubenswrapper[4858]: E0127 20:10:29.043583 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:29.543568023 +0000 UTC m=+174.251383729 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.143837 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:29 crc kubenswrapper[4858]: E0127 20:10:29.143994 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:29.643966516 +0000 UTC m=+174.351782222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.144150 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:29 crc kubenswrapper[4858]: E0127 20:10:29.144482 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:29.644467671 +0000 UTC m=+174.352283377 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.198051 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:29 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:29 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:29 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.198107 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.226413 4858 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.231940 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.232243 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rl5k9" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.237909 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b9vrj" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.245287 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:29 crc kubenswrapper[4858]: E0127 20:10:29.245685 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 20:10:29.745663048 +0000 UTC m=+174.453478754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.327200 4858 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T20:10:29.226443104Z","Handler":null,"Name":""} Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.331924 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.331975 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.349392 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:29 crc kubenswrapper[4858]: E0127 20:10:29.349751 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 20:10:29.849737887 +0000 UTC m=+174.557553593 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8tr47" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.349816 4858 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.349849 4858 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.450079 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.476502 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.515873 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j5hlm" event={"ID":"3fa7e9cb-b195-401a-b57c-bdb47f36ffb8","Type":"ContainerStarted","Data":"2df91c1bf36d765bca3cec7acef19d37751d8f8a033b6cb7e37b5d969df363f0"} Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.531104 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" event={"ID":"904f2c42-7297-4cec-a1ab-c2abd4f5132b","Type":"ContainerStarted","Data":"a83b0aa27eab252d7c2830bf567655b821dc95bb4fa5a41f446145cf5c62c604"} Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.532807 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4jtm" event={"ID":"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc","Type":"ContainerStarted","Data":"f56c1c0fda53b78bb9cea1303a29c2206b2538894952b9d84d118a4a0215ed7a"} Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.532851 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4jtm" event={"ID":"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc","Type":"ContainerStarted","Data":"ddf909e950a05f7d76440119014b4d10f9a9569d15de226233e901015e8a7662"} Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.534306 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.552277 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.560421 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.560493 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.577946 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-j5hlm" podStartSLOduration=145.577927154 podStartE2EDuration="2m25.577927154s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:29.551897124 +0000 UTC m=+174.259712840" watchObservedRunningTime="2026-01-27 20:10:29.577927154 +0000 UTC m=+174.285742860" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.586268 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2r5qs"] Jan 27 20:10:29 crc kubenswrapper[4858]: W0127 20:10:29.666914 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda279f23_0e34_40de_9b49_f325361ce0ff.slice/crio-7ee2aa5a351048ada1cec8c7c43360a40e8a7e28ec064a3b69626667ee931686 WatchSource:0}: Error finding container 7ee2aa5a351048ada1cec8c7c43360a40e8a7e28ec064a3b69626667ee931686: Status 404 returned error can't find the container with id 7ee2aa5a351048ada1cec8c7c43360a40e8a7e28ec064a3b69626667ee931686 Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.684807 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5mx8r" podStartSLOduration=12.684788374 podStartE2EDuration="12.684788374s" podCreationTimestamp="2026-01-27 20:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:29.644932316 +0000 UTC m=+174.352748022" watchObservedRunningTime="2026-01-27 20:10:29.684788374 +0000 UTC m=+174.392604080" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.715634 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p68jw"] Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.716929 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p68jw" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.719622 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.722617 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8tr47\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.724710 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p68jw"] Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.751893 4858 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xp2mw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 27 20:10:29 crc kubenswrapper[4858]: [+]log ok Jan 27 20:10:29 crc kubenswrapper[4858]: [+]etcd ok Jan 27 20:10:29 crc kubenswrapper[4858]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 27 20:10:29 crc kubenswrapper[4858]: [+]poststarthook/generic-apiserver-start-informers ok Jan 27 20:10:29 crc kubenswrapper[4858]: [+]poststarthook/max-in-flight-filter ok Jan 27 20:10:29 crc kubenswrapper[4858]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 27 20:10:29 crc kubenswrapper[4858]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 27 20:10:29 crc kubenswrapper[4858]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 27 20:10:29 crc kubenswrapper[4858]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 27 20:10:29 crc kubenswrapper[4858]: [+]poststarthook/project.openshift.io-projectcache ok Jan 27 20:10:29 crc kubenswrapper[4858]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 27 20:10:29 crc kubenswrapper[4858]: [+]poststarthook/openshift.io-startinformers ok Jan 27 20:10:29 crc kubenswrapper[4858]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 27 20:10:29 crc kubenswrapper[4858]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 27 20:10:29 crc kubenswrapper[4858]: livez check failed Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.751956 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" podUID="cbe6b979-e2aa-46c6-b4b0-67464630cddf" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.761467 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/471132af-0b76-4c4a-8560-deedd9d3381b-utilities\") pod \"redhat-marketplace-p68jw\" (UID: \"471132af-0b76-4c4a-8560-deedd9d3381b\") " pod="openshift-marketplace/redhat-marketplace-p68jw" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.761541 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/471132af-0b76-4c4a-8560-deedd9d3381b-catalog-content\") pod 
\"redhat-marketplace-p68jw\" (UID: \"471132af-0b76-4c4a-8560-deedd9d3381b\") " pod="openshift-marketplace/redhat-marketplace-p68jw" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.761588 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5c9d\" (UniqueName: \"kubernetes.io/projected/471132af-0b76-4c4a-8560-deedd9d3381b-kube-api-access-j5c9d\") pod \"redhat-marketplace-p68jw\" (UID: \"471132af-0b76-4c4a-8560-deedd9d3381b\") " pod="openshift-marketplace/redhat-marketplace-p68jw" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.847918 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.866249 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5c9d\" (UniqueName: \"kubernetes.io/projected/471132af-0b76-4c4a-8560-deedd9d3381b-kube-api-access-j5c9d\") pod \"redhat-marketplace-p68jw\" (UID: \"471132af-0b76-4c4a-8560-deedd9d3381b\") " pod="openshift-marketplace/redhat-marketplace-p68jw" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.866356 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/471132af-0b76-4c4a-8560-deedd9d3381b-utilities\") pod \"redhat-marketplace-p68jw\" (UID: \"471132af-0b76-4c4a-8560-deedd9d3381b\") " pod="openshift-marketplace/redhat-marketplace-p68jw" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.866417 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/471132af-0b76-4c4a-8560-deedd9d3381b-catalog-content\") pod \"redhat-marketplace-p68jw\" (UID: \"471132af-0b76-4c4a-8560-deedd9d3381b\") " pod="openshift-marketplace/redhat-marketplace-p68jw" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.866957 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/471132af-0b76-4c4a-8560-deedd9d3381b-catalog-content\") pod \"redhat-marketplace-p68jw\" (UID: \"471132af-0b76-4c4a-8560-deedd9d3381b\") " pod="openshift-marketplace/redhat-marketplace-p68jw" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.867478 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/471132af-0b76-4c4a-8560-deedd9d3381b-utilities\") pod \"redhat-marketplace-p68jw\" (UID: \"471132af-0b76-4c4a-8560-deedd9d3381b\") " pod="openshift-marketplace/redhat-marketplace-p68jw" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.929734 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5c9d\" (UniqueName: \"kubernetes.io/projected/471132af-0b76-4c4a-8560-deedd9d3381b-kube-api-access-j5c9d\") pod \"redhat-marketplace-p68jw\" (UID: \"471132af-0b76-4c4a-8560-deedd9d3381b\") " pod="openshift-marketplace/redhat-marketplace-p68jw" Jan 27 20:10:29 crc kubenswrapper[4858]: I0127 20:10:29.996203 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rl5k9"] Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.035767 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 
10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.035823 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.035953 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.036009 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.048657 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.049640 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.054011 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.054238 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.095344 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p68jw" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.120322 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.120971 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.147754 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r69km"] Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.149993 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r69km" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.177529 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-utilities\") pod \"redhat-marketplace-r69km\" (UID: \"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a\") " pod="openshift-marketplace/redhat-marketplace-r69km" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.177999 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-catalog-content\") pod \"redhat-marketplace-r69km\" (UID: \"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a\") " pod="openshift-marketplace/redhat-marketplace-r69km" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.178043 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c6f283a-8b12-4b20-901d-2f7e498704a1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3c6f283a-8b12-4b20-901d-2f7e498704a1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.178090 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c6f283a-8b12-4b20-901d-2f7e498704a1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3c6f283a-8b12-4b20-901d-2f7e498704a1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.178132 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrlh6\" (UniqueName: \"kubernetes.io/projected/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-kube-api-access-jrlh6\") pod \"redhat-marketplace-r69km\" (UID: \"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a\") " pod="openshift-marketplace/redhat-marketplace-r69km" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.194821 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.201542 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:30 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:30 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:30 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.201619 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.205747 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r69km"] Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.283502 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-rkkqh" Jan 27 
20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.284610 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c6f283a-8b12-4b20-901d-2f7e498704a1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3c6f283a-8b12-4b20-901d-2f7e498704a1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.284676 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c6f283a-8b12-4b20-901d-2f7e498704a1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3c6f283a-8b12-4b20-901d-2f7e498704a1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.284725 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrlh6\" (UniqueName: \"kubernetes.io/projected/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-kube-api-access-jrlh6\") pod \"redhat-marketplace-r69km\" (UID: \"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a\") " pod="openshift-marketplace/redhat-marketplace-r69km" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.284812 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-utilities\") pod \"redhat-marketplace-r69km\" (UID: \"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a\") " pod="openshift-marketplace/redhat-marketplace-r69km" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.284848 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-catalog-content\") pod \"redhat-marketplace-r69km\" (UID: \"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a\") " pod="openshift-marketplace/redhat-marketplace-r69km" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.285382 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-catalog-content\") pod \"redhat-marketplace-r69km\" (UID: \"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a\") " pod="openshift-marketplace/redhat-marketplace-r69km" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.285453 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c6f283a-8b12-4b20-901d-2f7e498704a1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3c6f283a-8b12-4b20-901d-2f7e498704a1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.286489 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-utilities\") pod \"redhat-marketplace-r69km\" (UID: \"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a\") " pod="openshift-marketplace/redhat-marketplace-r69km" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.329840 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrlh6\" (UniqueName: \"kubernetes.io/projected/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-kube-api-access-jrlh6\") pod \"redhat-marketplace-r69km\" (UID: \"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a\") " pod="openshift-marketplace/redhat-marketplace-r69km" Jan 27 20:10:30 crc 
kubenswrapper[4858]: I0127 20:10:30.350527 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c6f283a-8b12-4b20-901d-2f7e498704a1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3c6f283a-8b12-4b20-901d-2f7e498704a1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.397975 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.515067 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b9vrj"] Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.546433 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r69km" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.600732 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8tr47"] Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.633575 4858 generic.go:334] "Generic (PLEG): container finished" podID="da279f23-0e34-40de-9b49-f325361ce0ff" containerID="5e8819bd26ca0e6dc17464dd2e96af1123367e7bf6e991ff6e45f7df208b678d" exitCode=0 Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.633705 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5qs" event={"ID":"da279f23-0e34-40de-9b49-f325361ce0ff","Type":"ContainerDied","Data":"5e8819bd26ca0e6dc17464dd2e96af1123367e7bf6e991ff6e45f7df208b678d"} Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.633754 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5qs" event={"ID":"da279f23-0e34-40de-9b49-f325361ce0ff","Type":"ContainerStarted","Data":"7ee2aa5a351048ada1cec8c7c43360a40e8a7e28ec064a3b69626667ee931686"} Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.674916 4858 generic.go:334] "Generic (PLEG): container finished" podID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" containerID="f56c1c0fda53b78bb9cea1303a29c2206b2538894952b9d84d118a4a0215ed7a" exitCode=0 Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.675025 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4jtm" event={"ID":"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc","Type":"ContainerDied","Data":"f56c1c0fda53b78bb9cea1303a29c2206b2538894952b9d84d118a4a0215ed7a"} Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.688960 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rl5k9" event={"ID":"a57b4016-f4b5-4f01-aeed-9a730cd323c1","Type":"ContainerStarted","Data":"3f902f21e7d88e9809678713a44458910361a9ae40d9672c59b2fdf3accca3cb"} Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.689013 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rl5k9" event={"ID":"a57b4016-f4b5-4f01-aeed-9a730cd323c1","Type":"ContainerStarted","Data":"3bfb75124f1e7466a45a3f3aa3487b5e7735efd5e609f8dc49570d434654b8ad"} Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.872119 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gnzjf"] Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.873630 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gnzjf" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.905586 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.936566 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.937645 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.939584 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gnzjf"] Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.949756 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad57fc45-ce61-4d62-adb4-2a655f77e751-utilities\") pod \"redhat-operators-gnzjf\" (UID: \"ad57fc45-ce61-4d62-adb4-2a655f77e751\") " pod="openshift-marketplace/redhat-operators-gnzjf" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.949831 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad57fc45-ce61-4d62-adb4-2a655f77e751-catalog-content\") pod \"redhat-operators-gnzjf\" (UID: \"ad57fc45-ce61-4d62-adb4-2a655f77e751\") " pod="openshift-marketplace/redhat-operators-gnzjf" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.949875 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skwhn\" (UniqueName: \"kubernetes.io/projected/ad57fc45-ce61-4d62-adb4-2a655f77e751-kube-api-access-skwhn\") pod \"redhat-operators-gnzjf\" (UID: \"ad57fc45-ce61-4d62-adb4-2a655f77e751\") " pod="openshift-marketplace/redhat-operators-gnzjf" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.958725 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scdgl" Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.963979 4858 patch_prober.go:28] interesting pod/console-f9d7485db-p72qt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 27 20:10:30 crc kubenswrapper[4858]: I0127 20:10:30.964464 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-p72qt" podUID="25775e1a-346e-4b05-ae25-819a5aad12b7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.040352 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p68jw"] Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.051290 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skwhn\" (UniqueName: \"kubernetes.io/projected/ad57fc45-ce61-4d62-adb4-2a655f77e751-kube-api-access-skwhn\") pod \"redhat-operators-gnzjf\" (UID: \"ad57fc45-ce61-4d62-adb4-2a655f77e751\") " pod="openshift-marketplace/redhat-operators-gnzjf" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 
20:10:31.058279 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad57fc45-ce61-4d62-adb4-2a655f77e751-utilities\") pod \"redhat-operators-gnzjf\" (UID: \"ad57fc45-ce61-4d62-adb4-2a655f77e751\") " pod="openshift-marketplace/redhat-operators-gnzjf" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.065217 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad57fc45-ce61-4d62-adb4-2a655f77e751-catalog-content\") pod \"redhat-operators-gnzjf\" (UID: \"ad57fc45-ce61-4d62-adb4-2a655f77e751\") " pod="openshift-marketplace/redhat-operators-gnzjf" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.064026 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad57fc45-ce61-4d62-adb4-2a655f77e751-utilities\") pod \"redhat-operators-gnzjf\" (UID: \"ad57fc45-ce61-4d62-adb4-2a655f77e751\") " pod="openshift-marketplace/redhat-operators-gnzjf" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.068387 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad57fc45-ce61-4d62-adb4-2a655f77e751-catalog-content\") pod \"redhat-operators-gnzjf\" (UID: \"ad57fc45-ce61-4d62-adb4-2a655f77e751\") " pod="openshift-marketplace/redhat-operators-gnzjf" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.086121 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skwhn\" (UniqueName: \"kubernetes.io/projected/ad57fc45-ce61-4d62-adb4-2a655f77e751-kube-api-access-skwhn\") pod \"redhat-operators-gnzjf\" (UID: \"ad57fc45-ce61-4d62-adb4-2a655f77e751\") " pod="openshift-marketplace/redhat-operators-gnzjf" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.175531 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.191711 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:31 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:31 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:31 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.191784 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:31 crc kubenswrapper[4858]: W0127 20:10:31.230390 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3c6f283a_8b12_4b20_901d_2f7e498704a1.slice/crio-ab3d4b557ab9a63984c5b0e188923e28520710dfa6ac003b338309b9f1772ff1 WatchSource:0}: Error finding container ab3d4b557ab9a63984c5b0e188923e28520710dfa6ac003b338309b9f1772ff1: Status 404 returned error can't find the container with id ab3d4b557ab9a63984c5b0e188923e28520710dfa6ac003b338309b9f1772ff1 Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.266393 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-r69km"] Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.276911 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gnzjf" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.281120 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hps49"] Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.282980 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hps49" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.288708 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hps49"] Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.369810 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fc1461-1071-4f74-9d54-4de6f9a268dc-utilities\") pod \"redhat-operators-hps49\" (UID: \"26fc1461-1071-4f74-9d54-4de6f9a268dc\") " pod="openshift-marketplace/redhat-operators-hps49" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.370370 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fc1461-1071-4f74-9d54-4de6f9a268dc-catalog-content\") pod \"redhat-operators-hps49\" (UID: \"26fc1461-1071-4f74-9d54-4de6f9a268dc\") " pod="openshift-marketplace/redhat-operators-hps49" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.370395 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72p4f\" (UniqueName: \"kubernetes.io/projected/26fc1461-1071-4f74-9d54-4de6f9a268dc-kube-api-access-72p4f\") pod \"redhat-operators-hps49\" (UID: \"26fc1461-1071-4f74-9d54-4de6f9a268dc\") " pod="openshift-marketplace/redhat-operators-hps49" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.472119 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fc1461-1071-4f74-9d54-4de6f9a268dc-catalog-content\") pod \"redhat-operators-hps49\" (UID: \"26fc1461-1071-4f74-9d54-4de6f9a268dc\") " pod="openshift-marketplace/redhat-operators-hps49" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.472180 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72p4f\" (UniqueName: \"kubernetes.io/projected/26fc1461-1071-4f74-9d54-4de6f9a268dc-kube-api-access-72p4f\") pod \"redhat-operators-hps49\" (UID: \"26fc1461-1071-4f74-9d54-4de6f9a268dc\") " pod="openshift-marketplace/redhat-operators-hps49" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.472276 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fc1461-1071-4f74-9d54-4de6f9a268dc-utilities\") pod \"redhat-operators-hps49\" (UID: \"26fc1461-1071-4f74-9d54-4de6f9a268dc\") " pod="openshift-marketplace/redhat-operators-hps49" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.472868 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fc1461-1071-4f74-9d54-4de6f9a268dc-utilities\") pod \"redhat-operators-hps49\" (UID: \"26fc1461-1071-4f74-9d54-4de6f9a268dc\") " pod="openshift-marketplace/redhat-operators-hps49" Jan 27 
20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.473123 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fc1461-1071-4f74-9d54-4de6f9a268dc-catalog-content\") pod \"redhat-operators-hps49\" (UID: \"26fc1461-1071-4f74-9d54-4de6f9a268dc\") " pod="openshift-marketplace/redhat-operators-hps49" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.509481 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72p4f\" (UniqueName: \"kubernetes.io/projected/26fc1461-1071-4f74-9d54-4de6f9a268dc-kube-api-access-72p4f\") pod \"redhat-operators-hps49\" (UID: \"26fc1461-1071-4f74-9d54-4de6f9a268dc\") " pod="openshift-marketplace/redhat-operators-hps49" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.624110 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hps49" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.688949 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gnzjf"] Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.708195 4858 generic.go:334] "Generic (PLEG): container finished" podID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" containerID="3f902f21e7d88e9809678713a44458910361a9ae40d9672c59b2fdf3accca3cb" exitCode=0 Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.708279 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rl5k9" event={"ID":"a57b4016-f4b5-4f01-aeed-9a730cd323c1","Type":"ContainerDied","Data":"3f902f21e7d88e9809678713a44458910361a9ae40d9672c59b2fdf3accca3cb"} Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.711960 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3c6f283a-8b12-4b20-901d-2f7e498704a1","Type":"ContainerStarted","Data":"ab3d4b557ab9a63984c5b0e188923e28520710dfa6ac003b338309b9f1772ff1"} Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.729214 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" event={"ID":"631986f5-1f28-45ac-8390-c3ac0f3920c0","Type":"ContainerStarted","Data":"5170e3b3de596bfe04acc220aa30ea37732c2ab06a93b2eead4fd47108a5cf03"} Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.729262 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" event={"ID":"631986f5-1f28-45ac-8390-c3ac0f3920c0","Type":"ContainerStarted","Data":"bbe467fdf212450910655a0703c1612d811ee7bfe5deff20b474de7d4e6ae440"} Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.729561 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.746185 4858 generic.go:334] "Generic (PLEG): container finished" podID="81e828d5-7d0a-451f-98bd-05c2a2fcbea9" containerID="11fcc0d7678fc44e02df659beb90bb5a09d46d36b80f937358b0bbf14f1fd886" exitCode=0 Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.746276 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" event={"ID":"81e828d5-7d0a-451f-98bd-05c2a2fcbea9","Type":"ContainerDied","Data":"11fcc0d7678fc44e02df659beb90bb5a09d46d36b80f937358b0bbf14f1fd886"} Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 
20:10:31.751085 4858 generic.go:334] "Generic (PLEG): container finished" podID="471132af-0b76-4c4a-8560-deedd9d3381b" containerID="79344889d771d89a68ad5936af47e8b5245d725f9edea3ac40a7a35c9a42c153" exitCode=0 Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.751158 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p68jw" event={"ID":"471132af-0b76-4c4a-8560-deedd9d3381b","Type":"ContainerDied","Data":"79344889d771d89a68ad5936af47e8b5245d725f9edea3ac40a7a35c9a42c153"} Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.751182 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p68jw" event={"ID":"471132af-0b76-4c4a-8560-deedd9d3381b","Type":"ContainerStarted","Data":"b0fd166eba88b2fe2f219f4341a0cb796c6f8047f5c58d572bfc7430531c7f64"} Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.754746 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r69km" event={"ID":"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a","Type":"ContainerStarted","Data":"9701b934e9478a2313bd1aca320bef1a7eec9ba5a784ca52350fc8d885a48237"} Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.770657 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9vrj" event={"ID":"9cdbabda-bda6-438a-a671-0f15b0ad57c0","Type":"ContainerDied","Data":"80e3aab9b212016ff4ea276f584dc450ff24bf8be30a06d755e31496640d91e9"} Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.770441 4858 generic.go:334] "Generic (PLEG): container finished" podID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" containerID="80e3aab9b212016ff4ea276f584dc450ff24bf8be30a06d755e31496640d91e9" exitCode=0 Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.772540 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9vrj" event={"ID":"9cdbabda-bda6-438a-a671-0f15b0ad57c0","Type":"ContainerStarted","Data":"54d418b54f6d1ead93dfaf6c91b96200729e2bb6a2fdf8e414ab55a3c3de6298"} Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.795725 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" podStartSLOduration=147.795679335 podStartE2EDuration="2m27.795679335s" podCreationTimestamp="2026-01-27 20:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:31.760983185 +0000 UTC m=+176.468798911" watchObservedRunningTime="2026-01-27 20:10:31.795679335 +0000 UTC m=+176.503495081" Jan 27 20:10:31 crc kubenswrapper[4858]: W0127 20:10:31.806839 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad57fc45_ce61_4d62_adb4_2a655f77e751.slice/crio-64e1292b403c673354237bd33a182b093617c923564517838df96d91f1cea25f WatchSource:0}: Error finding container 64e1292b403c673354237bd33a182b093617c923564517838df96d91f1cea25f: Status 404 returned error can't find the container with id 64e1292b403c673354237bd33a182b093617c923564517838df96d91f1cea25f Jan 27 20:10:31 crc kubenswrapper[4858]: I0127 20:10:31.977011 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hps49"] Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.206898 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:32 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:32 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:32 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.207316 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.406467 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-rl4lk" Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.500880 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.508792 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.511876 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.511955 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.513708 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.596079 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2695b681-27c9-46c4-b491-380bdcd24329-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2695b681-27c9-46c4-b491-380bdcd24329\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.596164 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2695b681-27c9-46c4-b491-380bdcd24329-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2695b681-27c9-46c4-b491-380bdcd24329\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.697326 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2695b681-27c9-46c4-b491-380bdcd24329-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2695b681-27c9-46c4-b491-380bdcd24329\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.697458 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2695b681-27c9-46c4-b491-380bdcd24329-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2695b681-27c9-46c4-b491-380bdcd24329\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.697678 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/2695b681-27c9-46c4-b491-380bdcd24329-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2695b681-27c9-46c4-b491-380bdcd24329\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.730685 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2695b681-27c9-46c4-b491-380bdcd24329-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2695b681-27c9-46c4-b491-380bdcd24329\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.799242 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3c6f283a-8b12-4b20-901d-2f7e498704a1","Type":"ContainerStarted","Data":"a935424abe05cdd9b688878e6236a59cfdd0b0377e2cb52e4b9a8886e329d81a"} Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.801954 4858 generic.go:334] "Generic (PLEG): container finished" podID="ad57fc45-ce61-4d62-adb4-2a655f77e751" containerID="e1361cc076754f188cd1b18d242748ceb380025b17c1ba6ba90adebe607eb089" exitCode=0 Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.802007 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gnzjf" event={"ID":"ad57fc45-ce61-4d62-adb4-2a655f77e751","Type":"ContainerDied","Data":"e1361cc076754f188cd1b18d242748ceb380025b17c1ba6ba90adebe607eb089"} Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.802029 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gnzjf" event={"ID":"ad57fc45-ce61-4d62-adb4-2a655f77e751","Type":"ContainerStarted","Data":"64e1292b403c673354237bd33a182b093617c923564517838df96d91f1cea25f"} Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.807869 4858 generic.go:334] "Generic (PLEG): container finished" podID="d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a" containerID="f40ccdd5765bb6f1c3c437ba01554e388be38268f02abc05de4fe0b8b01e9205" exitCode=0 Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.807943 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r69km" event={"ID":"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a","Type":"ContainerDied","Data":"f40ccdd5765bb6f1c3c437ba01554e388be38268f02abc05de4fe0b8b01e9205"} Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.815284 4858 generic.go:334] "Generic (PLEG): container finished" podID="26fc1461-1071-4f74-9d54-4de6f9a268dc" containerID="27a905278f779361b832714d40621e24a9c81aab6b61f4d87fcfe80a27eb8e4f" exitCode=0 Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.816731 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hps49" event={"ID":"26fc1461-1071-4f74-9d54-4de6f9a268dc","Type":"ContainerDied","Data":"27a905278f779361b832714d40621e24a9c81aab6b61f4d87fcfe80a27eb8e4f"} Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.816775 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hps49" event={"ID":"26fc1461-1071-4f74-9d54-4de6f9a268dc","Type":"ContainerStarted","Data":"b03964bd4af5347b8be478005b4a868b2894f9544c83d389b074c594365406ff"} Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.830675 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.830653395 
podStartE2EDuration="2.830653395s" podCreationTimestamp="2026-01-27 20:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:32.819787022 +0000 UTC m=+177.527602728" watchObservedRunningTime="2026-01-27 20:10:32.830653395 +0000 UTC m=+177.538469101" Jan 27 20:10:32 crc kubenswrapper[4858]: I0127 20:10:32.860856 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.191349 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:33 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:33 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:33 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.191630 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.227279 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.309413 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-secret-volume\") pod \"81e828d5-7d0a-451f-98bd-05c2a2fcbea9\" (UID: \"81e828d5-7d0a-451f-98bd-05c2a2fcbea9\") " Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.309599 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjjcq\" (UniqueName: \"kubernetes.io/projected/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-kube-api-access-pjjcq\") pod \"81e828d5-7d0a-451f-98bd-05c2a2fcbea9\" (UID: \"81e828d5-7d0a-451f-98bd-05c2a2fcbea9\") " Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.309696 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-config-volume\") pod \"81e828d5-7d0a-451f-98bd-05c2a2fcbea9\" (UID: \"81e828d5-7d0a-451f-98bd-05c2a2fcbea9\") " Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.310623 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-config-volume" (OuterVolumeSpecName: "config-volume") pod "81e828d5-7d0a-451f-98bd-05c2a2fcbea9" (UID: "81e828d5-7d0a-451f-98bd-05c2a2fcbea9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.326778 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-kube-api-access-pjjcq" (OuterVolumeSpecName: "kube-api-access-pjjcq") pod "81e828d5-7d0a-451f-98bd-05c2a2fcbea9" (UID: "81e828d5-7d0a-451f-98bd-05c2a2fcbea9"). InnerVolumeSpecName "kube-api-access-pjjcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.330919 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "81e828d5-7d0a-451f-98bd-05c2a2fcbea9" (UID: "81e828d5-7d0a-451f-98bd-05c2a2fcbea9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.412341 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjjcq\" (UniqueName: \"kubernetes.io/projected/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-kube-api-access-pjjcq\") on node \"crc\" DevicePath \"\"" Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.412405 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.412421 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81e828d5-7d0a-451f-98bd-05c2a2fcbea9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.492563 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.827152 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" event={"ID":"81e828d5-7d0a-451f-98bd-05c2a2fcbea9","Type":"ContainerDied","Data":"9bb8589268b8487855423bc7bf3a62c5a3bf3ebba3a96f80a55d0522c33801bd"} Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.827198 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bb8589268b8487855423bc7bf3a62c5a3bf3ebba3a96f80a55d0522c33801bd" Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.827272 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5" Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.836485 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2695b681-27c9-46c4-b491-380bdcd24329","Type":"ContainerStarted","Data":"37de15ccddcb6f8729aee66ec2fff619d5f08cc461ac310b9a7cf0d9e7cc11f5"} Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.855876 4858 generic.go:334] "Generic (PLEG): container finished" podID="3c6f283a-8b12-4b20-901d-2f7e498704a1" containerID="a935424abe05cdd9b688878e6236a59cfdd0b0377e2cb52e4b9a8886e329d81a" exitCode=0 Jan 27 20:10:33 crc kubenswrapper[4858]: I0127 20:10:33.855929 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3c6f283a-8b12-4b20-901d-2f7e498704a1","Type":"ContainerDied","Data":"a935424abe05cdd9b688878e6236a59cfdd0b0377e2cb52e4b9a8886e329d81a"} Jan 27 20:10:34 crc kubenswrapper[4858]: I0127 20:10:34.190683 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:34 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:34 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:34 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:34 crc kubenswrapper[4858]: I0127 20:10:34.190967 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:34 crc kubenswrapper[4858]: I0127 20:10:34.717244 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:34 crc kubenswrapper[4858]: I0127 20:10:34.724633 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xp2mw" Jan 27 20:10:34 crc kubenswrapper[4858]: I0127 20:10:34.919698 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2695b681-27c9-46c4-b491-380bdcd24329","Type":"ContainerStarted","Data":"aecad7ee0e7472ccc1c4982c6c15b9d1b3384ad6b4f26bc37df0b02fb3718a4b"} Jan 27 20:10:34 crc kubenswrapper[4858]: I0127 20:10:34.966943 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.966919447 podStartE2EDuration="2.966919447s" podCreationTimestamp="2026-01-27 20:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:10:34.957412013 +0000 UTC m=+179.665227739" watchObservedRunningTime="2026-01-27 20:10:34.966919447 +0000 UTC m=+179.674735153" Jan 27 20:10:35 crc kubenswrapper[4858]: I0127 20:10:35.195954 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:35 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:35 crc kubenswrapper[4858]: 
[+]process-running ok Jan 27 20:10:35 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:35 crc kubenswrapper[4858]: I0127 20:10:35.196046 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:35 crc kubenswrapper[4858]: I0127 20:10:35.554926 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 20:10:35 crc kubenswrapper[4858]: I0127 20:10:35.657615 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c6f283a-8b12-4b20-901d-2f7e498704a1-kube-api-access\") pod \"3c6f283a-8b12-4b20-901d-2f7e498704a1\" (UID: \"3c6f283a-8b12-4b20-901d-2f7e498704a1\") " Jan 27 20:10:35 crc kubenswrapper[4858]: I0127 20:10:35.657785 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c6f283a-8b12-4b20-901d-2f7e498704a1-kubelet-dir\") pod \"3c6f283a-8b12-4b20-901d-2f7e498704a1\" (UID: \"3c6f283a-8b12-4b20-901d-2f7e498704a1\") " Jan 27 20:10:35 crc kubenswrapper[4858]: I0127 20:10:35.658381 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c6f283a-8b12-4b20-901d-2f7e498704a1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3c6f283a-8b12-4b20-901d-2f7e498704a1" (UID: "3c6f283a-8b12-4b20-901d-2f7e498704a1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:10:35 crc kubenswrapper[4858]: I0127 20:10:35.668787 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6f283a-8b12-4b20-901d-2f7e498704a1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3c6f283a-8b12-4b20-901d-2f7e498704a1" (UID: "3c6f283a-8b12-4b20-901d-2f7e498704a1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:10:35 crc kubenswrapper[4858]: I0127 20:10:35.759408 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c6f283a-8b12-4b20-901d-2f7e498704a1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 20:10:35 crc kubenswrapper[4858]: I0127 20:10:35.759439 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c6f283a-8b12-4b20-901d-2f7e498704a1-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 20:10:35 crc kubenswrapper[4858]: I0127 20:10:35.933834 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3c6f283a-8b12-4b20-901d-2f7e498704a1","Type":"ContainerDied","Data":"ab3d4b557ab9a63984c5b0e188923e28520710dfa6ac003b338309b9f1772ff1"} Jan 27 20:10:35 crc kubenswrapper[4858]: I0127 20:10:35.933856 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 20:10:35 crc kubenswrapper[4858]: I0127 20:10:35.933871 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab3d4b557ab9a63984c5b0e188923e28520710dfa6ac003b338309b9f1772ff1" Jan 27 20:10:35 crc kubenswrapper[4858]: I0127 20:10:35.955097 4858 generic.go:334] "Generic (PLEG): container finished" podID="2695b681-27c9-46c4-b491-380bdcd24329" containerID="aecad7ee0e7472ccc1c4982c6c15b9d1b3384ad6b4f26bc37df0b02fb3718a4b" exitCode=0 Jan 27 20:10:35 crc kubenswrapper[4858]: I0127 20:10:35.955148 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2695b681-27c9-46c4-b491-380bdcd24329","Type":"ContainerDied","Data":"aecad7ee0e7472ccc1c4982c6c15b9d1b3384ad6b4f26bc37df0b02fb3718a4b"} Jan 27 20:10:36 crc kubenswrapper[4858]: I0127 20:10:36.191038 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:36 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:36 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:36 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:36 crc kubenswrapper[4858]: I0127 20:10:36.191316 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:37 crc kubenswrapper[4858]: I0127 20:10:37.200764 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:37 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:37 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:37 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:37 crc kubenswrapper[4858]: I0127 20:10:37.201127 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:37 crc kubenswrapper[4858]: I0127 20:10:37.416272 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 20:10:37 crc kubenswrapper[4858]: I0127 20:10:37.501566 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2695b681-27c9-46c4-b491-380bdcd24329-kube-api-access\") pod \"2695b681-27c9-46c4-b491-380bdcd24329\" (UID: \"2695b681-27c9-46c4-b491-380bdcd24329\") " Jan 27 20:10:37 crc kubenswrapper[4858]: I0127 20:10:37.501652 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2695b681-27c9-46c4-b491-380bdcd24329-kubelet-dir\") pod \"2695b681-27c9-46c4-b491-380bdcd24329\" (UID: \"2695b681-27c9-46c4-b491-380bdcd24329\") " Jan 27 20:10:37 crc kubenswrapper[4858]: I0127 20:10:37.501813 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2695b681-27c9-46c4-b491-380bdcd24329-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2695b681-27c9-46c4-b491-380bdcd24329" (UID: "2695b681-27c9-46c4-b491-380bdcd24329"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:10:37 crc kubenswrapper[4858]: I0127 20:10:37.501983 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2695b681-27c9-46c4-b491-380bdcd24329-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 20:10:37 crc kubenswrapper[4858]: I0127 20:10:37.509747 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2695b681-27c9-46c4-b491-380bdcd24329-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2695b681-27c9-46c4-b491-380bdcd24329" (UID: "2695b681-27c9-46c4-b491-380bdcd24329"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:10:37 crc kubenswrapper[4858]: I0127 20:10:37.607301 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2695b681-27c9-46c4-b491-380bdcd24329-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 20:10:38 crc kubenswrapper[4858]: I0127 20:10:38.036131 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2695b681-27c9-46c4-b491-380bdcd24329","Type":"ContainerDied","Data":"37de15ccddcb6f8729aee66ec2fff619d5f08cc461ac310b9a7cf0d9e7cc11f5"} Jan 27 20:10:38 crc kubenswrapper[4858]: I0127 20:10:38.036416 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37de15ccddcb6f8729aee66ec2fff619d5f08cc461ac310b9a7cf0d9e7cc11f5" Jan 27 20:10:38 crc kubenswrapper[4858]: I0127 20:10:38.036192 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 20:10:38 crc kubenswrapper[4858]: I0127 20:10:38.204333 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:38 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:38 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:38 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:38 crc kubenswrapper[4858]: I0127 20:10:38.204460 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:39 crc kubenswrapper[4858]: I0127 20:10:39.191673 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:39 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:39 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:39 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:39 crc kubenswrapper[4858]: I0127 20:10:39.191748 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:40 crc kubenswrapper[4858]: I0127 20:10:40.032586 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:10:40 crc kubenswrapper[4858]: I0127 20:10:40.032678 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:10:40 crc kubenswrapper[4858]: I0127 20:10:40.032861 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:10:40 crc kubenswrapper[4858]: I0127 20:10:40.032969 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:10:40 crc kubenswrapper[4858]: I0127 20:10:40.189459 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:40 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:40 crc 
kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:40 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:40 crc kubenswrapper[4858]: I0127 20:10:40.189618 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:40 crc kubenswrapper[4858]: I0127 20:10:40.917895 4858 patch_prober.go:28] interesting pod/console-f9d7485db-p72qt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 27 20:10:40 crc kubenswrapper[4858]: I0127 20:10:40.917949 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-p72qt" podUID="25775e1a-346e-4b05-ae25-819a5aad12b7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 27 20:10:41 crc kubenswrapper[4858]: I0127 20:10:41.191061 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:41 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:41 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:41 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:41 crc kubenswrapper[4858]: I0127 20:10:41.191132 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:42 crc kubenswrapper[4858]: I0127 20:10:42.197453 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:42 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:42 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:42 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:42 crc kubenswrapper[4858]: I0127 20:10:42.197885 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:43 crc kubenswrapper[4858]: I0127 20:10:43.190072 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:43 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Jan 27 20:10:43 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:43 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:43 crc kubenswrapper[4858]: I0127 20:10:43.190119 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Jan 27 20:10:44 crc kubenswrapper[4858]: I0127 20:10:44.189618 4858 patch_prober.go:28] interesting pod/router-default-5444994796-68tdw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 20:10:44 crc kubenswrapper[4858]: [+]has-synced ok Jan 27 20:10:44 crc kubenswrapper[4858]: [+]process-running ok Jan 27 20:10:44 crc kubenswrapper[4858]: healthz check failed Jan 27 20:10:44 crc kubenswrapper[4858]: I0127 20:10:44.189664 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-68tdw" podUID="83766314-dad9-48dc-bd66-eea0bea1cefe" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 20:10:45 crc kubenswrapper[4858]: I0127 20:10:45.190349 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:45 crc kubenswrapper[4858]: I0127 20:10:45.198425 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-68tdw" Jan 27 20:10:49 crc kubenswrapper[4858]: I0127 20:10:49.858443 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:10:50 crc kubenswrapper[4858]: I0127 20:10:50.029434 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:10:50 crc kubenswrapper[4858]: I0127 20:10:50.029842 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:10:50 crc kubenswrapper[4858]: I0127 20:10:50.029652 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:10:50 crc kubenswrapper[4858]: I0127 20:10:50.029897 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-xpxs8" Jan 27 20:10:50 crc kubenswrapper[4858]: I0127 20:10:50.029940 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:10:50 crc kubenswrapper[4858]: I0127 20:10:50.030511 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"e679fc9d55876bf5591acb69cf021d59e1e8fd92bf17d522b95110672135fd45"} pod="openshift-console/downloads-7954f5f757-xpxs8" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 27 20:10:50 crc kubenswrapper[4858]: I0127 20:10:50.030599 4858 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:10:50 crc kubenswrapper[4858]: I0127 20:10:50.030649 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" containerID="cri-o://e679fc9d55876bf5591acb69cf021d59e1e8fd92bf17d522b95110672135fd45" gracePeriod=2 Jan 27 20:10:50 crc kubenswrapper[4858]: I0127 20:10:50.030694 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:10:50 crc kubenswrapper[4858]: I0127 20:10:50.922902 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:50 crc kubenswrapper[4858]: I0127 20:10:50.929167 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:10:51 crc kubenswrapper[4858]: I0127 20:10:51.296648 4858 generic.go:334] "Generic (PLEG): container finished" podID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerID="e679fc9d55876bf5591acb69cf021d59e1e8fd92bf17d522b95110672135fd45" exitCode=0 Jan 27 20:10:51 crc kubenswrapper[4858]: I0127 20:10:51.296797 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xpxs8" event={"ID":"1520e31e-c4b3-4df3-a8cc-db7b0daf491f","Type":"ContainerDied","Data":"e679fc9d55876bf5591acb69cf021d59e1e8fd92bf17d522b95110672135fd45"} Jan 27 20:10:59 crc kubenswrapper[4858]: I0127 20:10:59.328708 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:10:59 crc kubenswrapper[4858]: I0127 20:10:59.329513 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:11:00 crc kubenswrapper[4858]: I0127 20:11:00.030473 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:11:00 crc kubenswrapper[4858]: I0127 20:11:00.030597 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:11:00 crc kubenswrapper[4858]: I0127 20:11:00.907969 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5cfl5" Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.496738 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 20:11:08 crc kubenswrapper[4858]: E0127 20:11:08.497581 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2695b681-27c9-46c4-b491-380bdcd24329" containerName="pruner" Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.497594 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2695b681-27c9-46c4-b491-380bdcd24329" containerName="pruner" Jan 27 20:11:08 crc kubenswrapper[4858]: E0127 20:11:08.497607 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81e828d5-7d0a-451f-98bd-05c2a2fcbea9" containerName="collect-profiles" Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.497613 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="81e828d5-7d0a-451f-98bd-05c2a2fcbea9" containerName="collect-profiles" Jan 27 20:11:08 crc kubenswrapper[4858]: E0127 20:11:08.497631 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6f283a-8b12-4b20-901d-2f7e498704a1" containerName="pruner" Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.497637 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6f283a-8b12-4b20-901d-2f7e498704a1" containerName="pruner" Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.497738 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2695b681-27c9-46c4-b491-380bdcd24329" containerName="pruner" Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.497750 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6f283a-8b12-4b20-901d-2f7e498704a1" containerName="pruner" Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.497758 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="81e828d5-7d0a-451f-98bd-05c2a2fcbea9" containerName="collect-profiles" Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.498186 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.500392 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.500968 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.503222 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.524692 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1be45ab-a4ca-4afe-a73c-4759229a916e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f1be45ab-a4ca-4afe-a73c-4759229a916e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.524777 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1be45ab-a4ca-4afe-a73c-4759229a916e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f1be45ab-a4ca-4afe-a73c-4759229a916e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.626080 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1be45ab-a4ca-4afe-a73c-4759229a916e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f1be45ab-a4ca-4afe-a73c-4759229a916e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.626211 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1be45ab-a4ca-4afe-a73c-4759229a916e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f1be45ab-a4ca-4afe-a73c-4759229a916e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.626291 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1be45ab-a4ca-4afe-a73c-4759229a916e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f1be45ab-a4ca-4afe-a73c-4759229a916e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.644536 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1be45ab-a4ca-4afe-a73c-4759229a916e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f1be45ab-a4ca-4afe-a73c-4759229a916e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 20:11:08 crc kubenswrapper[4858]: I0127 20:11:08.819948 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 20:11:10 crc kubenswrapper[4858]: I0127 20:11:10.030326 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:11:10 crc kubenswrapper[4858]: I0127 20:11:10.031414 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:11:12 crc kubenswrapper[4858]: E0127 20:11:12.770428 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 20:11:12 crc kubenswrapper[4858]: E0127 20:11:12.770697 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5c9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-p68jw_openshift-marketplace(471132af-0b76-4c4a-8560-deedd9d3381b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 20:11:12 crc kubenswrapper[4858]: E0127 20:11:12.771914 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-p68jw" podUID="471132af-0b76-4c4a-8560-deedd9d3381b" Jan 27 20:11:12 crc kubenswrapper[4858]: I0127 20:11:12.889143 4858 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 20:11:12 crc kubenswrapper[4858]: I0127 20:11:12.892055 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 20:11:12 crc kubenswrapper[4858]: I0127 20:11:12.898651 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 20:11:12 crc kubenswrapper[4858]: I0127 20:11:12.980497 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e50de7cf-1829-431e-8655-0e948b1695f7-var-lock\") pod \"installer-9-crc\" (UID: \"e50de7cf-1829-431e-8655-0e948b1695f7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 20:11:12 crc kubenswrapper[4858]: I0127 20:11:12.980568 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e50de7cf-1829-431e-8655-0e948b1695f7-kube-api-access\") pod \"installer-9-crc\" (UID: \"e50de7cf-1829-431e-8655-0e948b1695f7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 20:11:12 crc kubenswrapper[4858]: I0127 20:11:12.980605 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e50de7cf-1829-431e-8655-0e948b1695f7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e50de7cf-1829-431e-8655-0e948b1695f7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 20:11:13 crc kubenswrapper[4858]: I0127 20:11:13.084210 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e50de7cf-1829-431e-8655-0e948b1695f7-var-lock\") pod \"installer-9-crc\" (UID: \"e50de7cf-1829-431e-8655-0e948b1695f7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 20:11:13 crc kubenswrapper[4858]: I0127 20:11:13.084264 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e50de7cf-1829-431e-8655-0e948b1695f7-kube-api-access\") pod \"installer-9-crc\" (UID: \"e50de7cf-1829-431e-8655-0e948b1695f7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 20:11:13 crc kubenswrapper[4858]: I0127 20:11:13.084302 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e50de7cf-1829-431e-8655-0e948b1695f7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e50de7cf-1829-431e-8655-0e948b1695f7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 20:11:13 crc kubenswrapper[4858]: I0127 20:11:13.084314 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e50de7cf-1829-431e-8655-0e948b1695f7-var-lock\") pod \"installer-9-crc\" (UID: \"e50de7cf-1829-431e-8655-0e948b1695f7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 20:11:13 crc kubenswrapper[4858]: I0127 20:11:13.084356 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e50de7cf-1829-431e-8655-0e948b1695f7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e50de7cf-1829-431e-8655-0e948b1695f7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 20:11:13 crc kubenswrapper[4858]: I0127 20:11:13.101740 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e50de7cf-1829-431e-8655-0e948b1695f7-kube-api-access\") pod \"installer-9-crc\" (UID: \"e50de7cf-1829-431e-8655-0e948b1695f7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 20:11:13 crc kubenswrapper[4858]: I0127 20:11:13.224914 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 20:11:16 crc kubenswrapper[4858]: E0127 20:11:16.076143 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p68jw" podUID="471132af-0b76-4c4a-8560-deedd9d3381b" Jan 27 20:11:17 crc kubenswrapper[4858]: E0127 20:11:17.163317 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 20:11:17 crc kubenswrapper[4858]: E0127 20:11:17.163764 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7cqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-j4jtm_openshift-marketplace(405f7c13-54ae-46fa-99c1-7c8a61c2f3bc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 20:11:17 crc kubenswrapper[4858]: E0127 20:11:17.165421 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-j4jtm" podUID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" Jan 27 20:11:20 crc kubenswrapper[4858]: E0127 20:11:20.000963 4858 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-j4jtm" podUID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" Jan 27 20:11:20 crc kubenswrapper[4858]: I0127 20:11:20.028997 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:11:20 crc kubenswrapper[4858]: I0127 20:11:20.029044 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:11:21 crc kubenswrapper[4858]: E0127 20:11:21.725214 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 20:11:21 crc kubenswrapper[4858]: E0127 20:11:21.725693 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bzdhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rl5k9_openshift-marketplace(a57b4016-f4b5-4f01-aeed-9a730cd323c1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 20:11:21 crc kubenswrapper[4858]: E0127 20:11:21.726972 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" 
pod="openshift-marketplace/community-operators-rl5k9" podUID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" Jan 27 20:11:22 crc kubenswrapper[4858]: E0127 20:11:22.504373 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 20:11:22 crc kubenswrapper[4858]: E0127 20:11:22.504518 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmd5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2r5qs_openshift-marketplace(da279f23-0e34-40de-9b49-f325361ce0ff): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 20:11:22 crc kubenswrapper[4858]: E0127 20:11:22.505770 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2r5qs" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" Jan 27 20:11:22 crc kubenswrapper[4858]: E0127 20:11:22.518279 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 20:11:22 crc kubenswrapper[4858]: E0127 20:11:22.518432 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcc4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-b9vrj_openshift-marketplace(9cdbabda-bda6-438a-a671-0f15b0ad57c0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 20:11:22 crc kubenswrapper[4858]: E0127 20:11:22.519611 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-b9vrj" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" Jan 27 20:11:25 crc kubenswrapper[4858]: E0127 20:11:25.095360 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rl5k9" podUID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" Jan 27 20:11:25 crc kubenswrapper[4858]: E0127 20:11:25.095457 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-b9vrj" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" Jan 27 20:11:25 crc kubenswrapper[4858]: E0127 20:11:25.095449 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2r5qs" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" Jan 27 20:11:25 crc kubenswrapper[4858]: E0127 20:11:25.323923 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 20:11:25 crc kubenswrapper[4858]: E0127 20:11:25.324636 4858 kuberuntime_manager.go:1274] "Unhandled 
Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-skwhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gnzjf_openshift-marketplace(ad57fc45-ce61-4d62-adb4-2a655f77e751): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 20:11:25 crc kubenswrapper[4858]: E0127 20:11:25.325821 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-gnzjf" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" Jan 27 20:11:25 crc kubenswrapper[4858]: E0127 20:11:25.425162 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 20:11:25 crc kubenswrapper[4858]: E0127 20:11:25.425344 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrlh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-r69km_openshift-marketplace(d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 20:11:25 crc kubenswrapper[4858]: E0127 20:11:25.426508 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-r69km" podUID="d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a" Jan 27 20:11:25 crc kubenswrapper[4858]: I0127 20:11:25.468190 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xpxs8" event={"ID":"1520e31e-c4b3-4df3-a8cc-db7b0daf491f","Type":"ContainerStarted","Data":"4936332ca0180c1e07082cba47bdeed898d62bb2df7e26d8c9619d60ec0f30f5"} Jan 27 20:11:25 crc kubenswrapper[4858]: I0127 20:11:25.468884 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:11:25 crc kubenswrapper[4858]: I0127 20:11:25.468922 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:11:25 crc kubenswrapper[4858]: E0127 20:11:25.470576 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gnzjf" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" Jan 27 20:11:25 crc kubenswrapper[4858]: E0127 20:11:25.470576 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" 
with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-r69km" podUID="d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a" Jan 27 20:11:25 crc kubenswrapper[4858]: I0127 20:11:25.576316 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 20:11:25 crc kubenswrapper[4858]: I0127 20:11:25.619132 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 20:11:25 crc kubenswrapper[4858]: W0127 20:11:25.624569 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode50de7cf_1829_431e_8655_0e948b1695f7.slice/crio-ce509a6cf06deb01e804fa998673d6ffbc71838fd8d9634015af8dfbc215ba72 WatchSource:0}: Error finding container ce509a6cf06deb01e804fa998673d6ffbc71838fd8d9634015af8dfbc215ba72: Status 404 returned error can't find the container with id ce509a6cf06deb01e804fa998673d6ffbc71838fd8d9634015af8dfbc215ba72 Jan 27 20:11:25 crc kubenswrapper[4858]: E0127 20:11:25.669302 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 20:11:25 crc kubenswrapper[4858]: E0127 20:11:25.669499 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72p4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hps49_openshift-marketplace(26fc1461-1071-4f74-9d54-4de6f9a268dc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 20:11:25 crc kubenswrapper[4858]: E0127 20:11:25.670716 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled\"" pod="openshift-marketplace/redhat-operators-hps49" podUID="26fc1461-1071-4f74-9d54-4de6f9a268dc" Jan 27 20:11:26 crc kubenswrapper[4858]: I0127 20:11:26.474727 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e50de7cf-1829-431e-8655-0e948b1695f7","Type":"ContainerStarted","Data":"106fbbcb006d9526b110f37e092977076ed9a09ff41aa2933c387bcdf206f31c"} Jan 27 20:11:26 crc kubenswrapper[4858]: I0127 20:11:26.475139 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e50de7cf-1829-431e-8655-0e948b1695f7","Type":"ContainerStarted","Data":"ce509a6cf06deb01e804fa998673d6ffbc71838fd8d9634015af8dfbc215ba72"} Jan 27 20:11:26 crc kubenswrapper[4858]: I0127 20:11:26.476083 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f1be45ab-a4ca-4afe-a73c-4759229a916e","Type":"ContainerStarted","Data":"9783888abf8cfe9a537531122b25ad347ad5851894eefba79924316e91a859c0"} Jan 27 20:11:26 crc kubenswrapper[4858]: I0127 20:11:26.476113 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f1be45ab-a4ca-4afe-a73c-4759229a916e","Type":"ContainerStarted","Data":"39c275f33ce29a49f687e0923b5ca07d7a2c7fdcc2295363797486aaec1fb49e"} Jan 27 20:11:26 crc kubenswrapper[4858]: I0127 20:11:26.476203 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-xpxs8" Jan 27 20:11:26 crc kubenswrapper[4858]: I0127 20:11:26.476910 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:11:26 crc kubenswrapper[4858]: I0127 20:11:26.476983 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:11:26 crc kubenswrapper[4858]: I0127 20:11:26.500685 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=14.500665768 podStartE2EDuration="14.500665768s" podCreationTimestamp="2026-01-27 20:11:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:11:26.495365042 +0000 UTC m=+231.203180768" watchObservedRunningTime="2026-01-27 20:11:26.500665768 +0000 UTC m=+231.208481464" Jan 27 20:11:26 crc kubenswrapper[4858]: E0127 20:11:26.504737 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hps49" podUID="26fc1461-1071-4f74-9d54-4de6f9a268dc" Jan 27 20:11:26 crc kubenswrapper[4858]: I0127 20:11:26.523159 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=18.52313748 podStartE2EDuration="18.52313748s" podCreationTimestamp="2026-01-27 20:11:08 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:11:26.517256826 +0000 UTC m=+231.225072542" watchObservedRunningTime="2026-01-27 20:11:26.52313748 +0000 UTC m=+231.230953196" Jan 27 20:11:27 crc kubenswrapper[4858]: I0127 20:11:27.487842 4858 generic.go:334] "Generic (PLEG): container finished" podID="f1be45ab-a4ca-4afe-a73c-4759229a916e" containerID="9783888abf8cfe9a537531122b25ad347ad5851894eefba79924316e91a859c0" exitCode=0 Jan 27 20:11:27 crc kubenswrapper[4858]: I0127 20:11:27.488096 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f1be45ab-a4ca-4afe-a73c-4759229a916e","Type":"ContainerDied","Data":"9783888abf8cfe9a537531122b25ad347ad5851894eefba79924316e91a859c0"} Jan 27 20:11:27 crc kubenswrapper[4858]: I0127 20:11:27.489132 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:11:27 crc kubenswrapper[4858]: I0127 20:11:27.489174 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:11:28 crc kubenswrapper[4858]: I0127 20:11:28.762085 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 20:11:28 crc kubenswrapper[4858]: I0127 20:11:28.882374 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1be45ab-a4ca-4afe-a73c-4759229a916e-kubelet-dir\") pod \"f1be45ab-a4ca-4afe-a73c-4759229a916e\" (UID: \"f1be45ab-a4ca-4afe-a73c-4759229a916e\") " Jan 27 20:11:28 crc kubenswrapper[4858]: I0127 20:11:28.882469 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1be45ab-a4ca-4afe-a73c-4759229a916e-kube-api-access\") pod \"f1be45ab-a4ca-4afe-a73c-4759229a916e\" (UID: \"f1be45ab-a4ca-4afe-a73c-4759229a916e\") " Jan 27 20:11:28 crc kubenswrapper[4858]: I0127 20:11:28.883984 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1be45ab-a4ca-4afe-a73c-4759229a916e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f1be45ab-a4ca-4afe-a73c-4759229a916e" (UID: "f1be45ab-a4ca-4afe-a73c-4759229a916e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:11:28 crc kubenswrapper[4858]: I0127 20:11:28.889821 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1be45ab-a4ca-4afe-a73c-4759229a916e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f1be45ab-a4ca-4afe-a73c-4759229a916e" (UID: "f1be45ab-a4ca-4afe-a73c-4759229a916e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:11:28 crc kubenswrapper[4858]: I0127 20:11:28.984072 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1be45ab-a4ca-4afe-a73c-4759229a916e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 20:11:28 crc kubenswrapper[4858]: I0127 20:11:28.984114 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1be45ab-a4ca-4afe-a73c-4759229a916e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 20:11:29 crc kubenswrapper[4858]: I0127 20:11:29.329488 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:11:29 crc kubenswrapper[4858]: I0127 20:11:29.329873 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:11:29 crc kubenswrapper[4858]: I0127 20:11:29.329921 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:11:29 crc kubenswrapper[4858]: I0127 20:11:29.330518 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 20:11:29 crc kubenswrapper[4858]: I0127 20:11:29.331138 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689" gracePeriod=600 Jan 27 20:11:29 crc kubenswrapper[4858]: I0127 20:11:29.503427 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f1be45ab-a4ca-4afe-a73c-4759229a916e","Type":"ContainerDied","Data":"39c275f33ce29a49f687e0923b5ca07d7a2c7fdcc2295363797486aaec1fb49e"} Jan 27 20:11:29 crc kubenswrapper[4858]: I0127 20:11:29.503478 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 20:11:29 crc kubenswrapper[4858]: I0127 20:11:29.503484 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39c275f33ce29a49f687e0923b5ca07d7a2c7fdcc2295363797486aaec1fb49e" Jan 27 20:11:29 crc kubenswrapper[4858]: I0127 20:11:29.505572 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689" exitCode=0 Jan 27 20:11:29 crc kubenswrapper[4858]: I0127 20:11:29.505609 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689"} Jan 27 20:11:30 crc kubenswrapper[4858]: I0127 20:11:30.029971 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:11:30 crc kubenswrapper[4858]: I0127 20:11:30.030889 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:11:30 crc kubenswrapper[4858]: I0127 20:11:30.029996 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:11:30 crc kubenswrapper[4858]: I0127 20:11:30.031218 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:11:30 crc kubenswrapper[4858]: I0127 20:11:30.512543 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"f523d2a034fb7aa3deeabfd7fe2846140bad94ae6e8919a72e4a06a8629bcf50"} Jan 27 20:11:33 crc kubenswrapper[4858]: I0127 20:11:33.531735 4858 generic.go:334] "Generic (PLEG): container finished" podID="471132af-0b76-4c4a-8560-deedd9d3381b" containerID="304a0da64956f115429c2092b3c2e57f8858c6aa349a00a6b61f530d7a0dac49" exitCode=0 Jan 27 20:11:33 crc kubenswrapper[4858]: I0127 20:11:33.531825 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p68jw" event={"ID":"471132af-0b76-4c4a-8560-deedd9d3381b","Type":"ContainerDied","Data":"304a0da64956f115429c2092b3c2e57f8858c6aa349a00a6b61f530d7a0dac49"} Jan 27 20:11:35 crc kubenswrapper[4858]: I0127 20:11:35.663705 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mzt2r"] Jan 27 20:11:38 crc kubenswrapper[4858]: I0127 20:11:38.561261 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p68jw" 
event={"ID":"471132af-0b76-4c4a-8560-deedd9d3381b","Type":"ContainerStarted","Data":"9ce77be1574f0d928284a55cd7191e956caa7d296dd3fc2b9e8575e2bbceb4b1"} Jan 27 20:11:38 crc kubenswrapper[4858]: I0127 20:11:38.581088 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p68jw" podStartSLOduration=4.538214461 podStartE2EDuration="1m9.581066164s" podCreationTimestamp="2026-01-27 20:10:29 +0000 UTC" firstStartedPulling="2026-01-27 20:10:31.753760487 +0000 UTC m=+176.461576193" lastFinishedPulling="2026-01-27 20:11:36.79661219 +0000 UTC m=+241.504427896" observedRunningTime="2026-01-27 20:11:38.580457795 +0000 UTC m=+243.288273511" watchObservedRunningTime="2026-01-27 20:11:38.581066164 +0000 UTC m=+243.288881870" Jan 27 20:11:40 crc kubenswrapper[4858]: I0127 20:11:40.029038 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:11:40 crc kubenswrapper[4858]: I0127 20:11:40.029103 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:11:40 crc kubenswrapper[4858]: I0127 20:11:40.029399 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-xpxs8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 27 20:11:40 crc kubenswrapper[4858]: I0127 20:11:40.029435 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xpxs8" podUID="1520e31e-c4b3-4df3-a8cc-db7b0daf491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 27 20:11:40 crc kubenswrapper[4858]: I0127 20:11:40.095958 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p68jw" Jan 27 20:11:40 crc kubenswrapper[4858]: I0127 20:11:40.096016 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p68jw" Jan 27 20:11:42 crc kubenswrapper[4858]: I0127 20:11:42.530661 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-p68jw" podUID="471132af-0b76-4c4a-8560-deedd9d3381b" containerName="registry-server" probeResult="failure" output=< Jan 27 20:11:42 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 20:11:42 crc kubenswrapper[4858]: > Jan 27 20:11:50 crc kubenswrapper[4858]: I0127 20:11:50.037140 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-xpxs8" Jan 27 20:11:50 crc kubenswrapper[4858]: I0127 20:11:50.209905 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p68jw" Jan 27 20:11:50 crc kubenswrapper[4858]: I0127 20:11:50.252212 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p68jw" Jan 27 
20:11:53 crc kubenswrapper[4858]: I0127 20:11:53.671825 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5qs" event={"ID":"da279f23-0e34-40de-9b49-f325361ce0ff","Type":"ContainerStarted","Data":"8b750f8bd0addae3c6bfa364bf61ce12b88ca226529cae678e9e3ddb9e4bd974"} Jan 27 20:11:53 crc kubenswrapper[4858]: I0127 20:11:53.673668 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4jtm" event={"ID":"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc","Type":"ContainerStarted","Data":"2600c0a02c2137bc337c925ed9c2af54b977e6f8540ea2fa73cf5229121fdc13"} Jan 27 20:11:53 crc kubenswrapper[4858]: I0127 20:11:53.678799 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hps49" event={"ID":"26fc1461-1071-4f74-9d54-4de6f9a268dc","Type":"ContainerStarted","Data":"2174a284212b718f9c8d720cdb3dd58ed7d5915d9cf66f800fcbbfd5816c33e1"} Jan 27 20:11:53 crc kubenswrapper[4858]: I0127 20:11:53.680994 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r69km" event={"ID":"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a","Type":"ContainerStarted","Data":"239ea1bc70093e8a144d1060556591b2b43840db112b7c5f483786bf05e11380"} Jan 27 20:11:53 crc kubenswrapper[4858]: I0127 20:11:53.691196 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9vrj" event={"ID":"9cdbabda-bda6-438a-a671-0f15b0ad57c0","Type":"ContainerStarted","Data":"c0b481b3a0dd98b88784c0ca344a5ae35de1d5418a11c93df208e78df407073b"} Jan 27 20:11:53 crc kubenswrapper[4858]: I0127 20:11:53.694810 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rl5k9" event={"ID":"a57b4016-f4b5-4f01-aeed-9a730cd323c1","Type":"ContainerStarted","Data":"7bfe14de2b77ac1529774f166c646ffb39cb30801ec8c981fcfce601c7341ef5"} Jan 27 20:11:53 crc kubenswrapper[4858]: I0127 20:11:53.700739 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gnzjf" event={"ID":"ad57fc45-ce61-4d62-adb4-2a655f77e751","Type":"ContainerStarted","Data":"d9d77eb2216a249ab7435d27a44c2f2d153b4cc7b5eb38fa29089edc33a092a7"} Jan 27 20:11:54 crc kubenswrapper[4858]: I0127 20:11:54.747734 4858 generic.go:334] "Generic (PLEG): container finished" podID="d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a" containerID="239ea1bc70093e8a144d1060556591b2b43840db112b7c5f483786bf05e11380" exitCode=0 Jan 27 20:11:54 crc kubenswrapper[4858]: I0127 20:11:54.747795 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r69km" event={"ID":"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a","Type":"ContainerDied","Data":"239ea1bc70093e8a144d1060556591b2b43840db112b7c5f483786bf05e11380"} Jan 27 20:11:55 crc kubenswrapper[4858]: I0127 20:11:55.767456 4858 generic.go:334] "Generic (PLEG): container finished" podID="da279f23-0e34-40de-9b49-f325361ce0ff" containerID="8b750f8bd0addae3c6bfa364bf61ce12b88ca226529cae678e9e3ddb9e4bd974" exitCode=0 Jan 27 20:11:55 crc kubenswrapper[4858]: I0127 20:11:55.767584 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5qs" event={"ID":"da279f23-0e34-40de-9b49-f325361ce0ff","Type":"ContainerDied","Data":"8b750f8bd0addae3c6bfa364bf61ce12b88ca226529cae678e9e3ddb9e4bd974"} Jan 27 20:11:55 crc kubenswrapper[4858]: I0127 20:11:55.777147 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" containerID="2600c0a02c2137bc337c925ed9c2af54b977e6f8540ea2fa73cf5229121fdc13" exitCode=0 Jan 27 20:11:55 crc kubenswrapper[4858]: I0127 20:11:55.777332 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4jtm" event={"ID":"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc","Type":"ContainerDied","Data":"2600c0a02c2137bc337c925ed9c2af54b977e6f8540ea2fa73cf5229121fdc13"} Jan 27 20:11:55 crc kubenswrapper[4858]: I0127 20:11:55.782712 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r69km" event={"ID":"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a","Type":"ContainerStarted","Data":"166e9bcf2bd3b7203815da4c9ae4a319eb0c83f84d51fb9b0c42dfd306c09418"} Jan 27 20:11:55 crc kubenswrapper[4858]: I0127 20:11:55.784359 4858 generic.go:334] "Generic (PLEG): container finished" podID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" containerID="c0b481b3a0dd98b88784c0ca344a5ae35de1d5418a11c93df208e78df407073b" exitCode=0 Jan 27 20:11:55 crc kubenswrapper[4858]: I0127 20:11:55.784410 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9vrj" event={"ID":"9cdbabda-bda6-438a-a671-0f15b0ad57c0","Type":"ContainerDied","Data":"c0b481b3a0dd98b88784c0ca344a5ae35de1d5418a11c93df208e78df407073b"} Jan 27 20:11:55 crc kubenswrapper[4858]: I0127 20:11:55.786396 4858 generic.go:334] "Generic (PLEG): container finished" podID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" containerID="7bfe14de2b77ac1529774f166c646ffb39cb30801ec8c981fcfce601c7341ef5" exitCode=0 Jan 27 20:11:55 crc kubenswrapper[4858]: I0127 20:11:55.786424 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rl5k9" event={"ID":"a57b4016-f4b5-4f01-aeed-9a730cd323c1","Type":"ContainerDied","Data":"7bfe14de2b77ac1529774f166c646ffb39cb30801ec8c981fcfce601c7341ef5"} Jan 27 20:11:57 crc kubenswrapper[4858]: I0127 20:11:57.810227 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gnzjf" event={"ID":"ad57fc45-ce61-4d62-adb4-2a655f77e751","Type":"ContainerDied","Data":"d9d77eb2216a249ab7435d27a44c2f2d153b4cc7b5eb38fa29089edc33a092a7"} Jan 27 20:11:57 crc kubenswrapper[4858]: I0127 20:11:57.810268 4858 generic.go:334] "Generic (PLEG): container finished" podID="ad57fc45-ce61-4d62-adb4-2a655f77e751" containerID="d9d77eb2216a249ab7435d27a44c2f2d153b4cc7b5eb38fa29089edc33a092a7" exitCode=0 Jan 27 20:11:57 crc kubenswrapper[4858]: I0127 20:11:57.814121 4858 generic.go:334] "Generic (PLEG): container finished" podID="26fc1461-1071-4f74-9d54-4de6f9a268dc" containerID="2174a284212b718f9c8d720cdb3dd58ed7d5915d9cf66f800fcbbfd5816c33e1" exitCode=0 Jan 27 20:11:57 crc kubenswrapper[4858]: I0127 20:11:57.814158 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hps49" event={"ID":"26fc1461-1071-4f74-9d54-4de6f9a268dc","Type":"ContainerDied","Data":"2174a284212b718f9c8d720cdb3dd58ed7d5915d9cf66f800fcbbfd5816c33e1"} Jan 27 20:11:57 crc kubenswrapper[4858]: I0127 20:11:57.833917 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r69km" podStartSLOduration=5.415929891 podStartE2EDuration="1m27.833899629s" podCreationTimestamp="2026-01-27 20:10:30 +0000 UTC" firstStartedPulling="2026-01-27 20:10:32.809331061 +0000 UTC m=+177.517146767" lastFinishedPulling="2026-01-27 20:11:55.227300799 +0000 UTC m=+259.935116505" 
observedRunningTime="2026-01-27 20:11:56.819243163 +0000 UTC m=+261.527058909" watchObservedRunningTime="2026-01-27 20:11:57.833899629 +0000 UTC m=+262.541715335" Jan 27 20:12:01 crc kubenswrapper[4858]: I0127 20:12:01.085006 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r69km" Jan 27 20:12:01 crc kubenswrapper[4858]: I0127 20:12:01.085499 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r69km" Jan 27 20:12:01 crc kubenswrapper[4858]: I0127 20:12:01.095269 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" podUID="37cd25e0-a46b-4f44-a271-2d15a2ac9b07" containerName="oauth-openshift" containerID="cri-o://71de311a25d593c305f43e78f57265bfbc26c9f7875bb94720272b9da8fad2b9" gracePeriod=15 Jan 27 20:12:01 crc kubenswrapper[4858]: I0127 20:12:01.127214 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r69km" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.098262 4858 generic.go:334] "Generic (PLEG): container finished" podID="37cd25e0-a46b-4f44-a271-2d15a2ac9b07" containerID="71de311a25d593c305f43e78f57265bfbc26c9f7875bb94720272b9da8fad2b9" exitCode=0 Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.098363 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" event={"ID":"37cd25e0-a46b-4f44-a271-2d15a2ac9b07","Type":"ContainerDied","Data":"71de311a25d593c305f43e78f57265bfbc26c9f7875bb94720272b9da8fad2b9"} Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.139318 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r69km" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.352427 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.382382 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-fc667b7f-p64j4"] Jan 27 20:12:02 crc kubenswrapper[4858]: E0127 20:12:02.382831 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1be45ab-a4ca-4afe-a73c-4759229a916e" containerName="pruner" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.382853 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1be45ab-a4ca-4afe-a73c-4759229a916e" containerName="pruner" Jan 27 20:12:02 crc kubenswrapper[4858]: E0127 20:12:02.382866 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37cd25e0-a46b-4f44-a271-2d15a2ac9b07" containerName="oauth-openshift" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.382874 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="37cd25e0-a46b-4f44-a271-2d15a2ac9b07" containerName="oauth-openshift" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.383002 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="37cd25e0-a46b-4f44-a271-2d15a2ac9b07" containerName="oauth-openshift" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.383025 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1be45ab-a4ca-4afe-a73c-4759229a916e" containerName="pruner" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.383491 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.396732 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-fc667b7f-p64j4"] Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.474339 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-login\") pod \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.474414 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-cliconfig\") pod \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.474481 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-router-certs\") pod \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.474535 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-session\") pod \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.474582 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-audit-dir\") pod \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.474631 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zglst\" (UniqueName: \"kubernetes.io/projected/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-kube-api-access-zglst\") pod \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.474660 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-trusted-ca-bundle\") pod \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.474714 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-error\") pod \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.474743 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-provider-selection\") pod \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.474747 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "37cd25e0-a46b-4f44-a271-2d15a2ac9b07" (UID: "37cd25e0-a46b-4f44-a271-2d15a2ac9b07"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.474767 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-ocp-branding-template\") pod \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.474800 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-idp-0-file-data\") pod \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.474836 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-audit-policies\") pod \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.474860 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-service-ca\") pod \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.474893 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-serving-cert\") pod \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\" (UID: \"37cd25e0-a46b-4f44-a271-2d15a2ac9b07\") " Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.475075 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.475110 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-756c4\" (UniqueName: \"kubernetes.io/projected/18b4ebfb-ef47-4893-9f78-d6562b229c0c-kube-api-access-756c4\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.475143 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-user-template-error\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.475169 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-service-ca\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.475252 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-session\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.475280 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.475303 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-user-template-login\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.475358 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-router-certs\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.475409 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/18b4ebfb-ef47-4893-9f78-d6562b229c0c-audit-policies\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.475434 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.475457 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.475486 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.475510 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/18b4ebfb-ef47-4893-9f78-d6562b229c0c-audit-dir\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.475568 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.475632 4858 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.475761 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "37cd25e0-a46b-4f44-a271-2d15a2ac9b07" (UID: "37cd25e0-a46b-4f44-a271-2d15a2ac9b07"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.475908 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "37cd25e0-a46b-4f44-a271-2d15a2ac9b07" (UID: "37cd25e0-a46b-4f44-a271-2d15a2ac9b07"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.476600 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "37cd25e0-a46b-4f44-a271-2d15a2ac9b07" (UID: "37cd25e0-a46b-4f44-a271-2d15a2ac9b07"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.477021 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "37cd25e0-a46b-4f44-a271-2d15a2ac9b07" (UID: "37cd25e0-a46b-4f44-a271-2d15a2ac9b07"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.480996 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-kube-api-access-zglst" (OuterVolumeSpecName: "kube-api-access-zglst") pod "37cd25e0-a46b-4f44-a271-2d15a2ac9b07" (UID: "37cd25e0-a46b-4f44-a271-2d15a2ac9b07"). InnerVolumeSpecName "kube-api-access-zglst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.481213 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "37cd25e0-a46b-4f44-a271-2d15a2ac9b07" (UID: "37cd25e0-a46b-4f44-a271-2d15a2ac9b07"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.486084 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "37cd25e0-a46b-4f44-a271-2d15a2ac9b07" (UID: "37cd25e0-a46b-4f44-a271-2d15a2ac9b07"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.486422 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "37cd25e0-a46b-4f44-a271-2d15a2ac9b07" (UID: "37cd25e0-a46b-4f44-a271-2d15a2ac9b07"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.486825 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "37cd25e0-a46b-4f44-a271-2d15a2ac9b07" (UID: "37cd25e0-a46b-4f44-a271-2d15a2ac9b07"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.487034 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "37cd25e0-a46b-4f44-a271-2d15a2ac9b07" (UID: "37cd25e0-a46b-4f44-a271-2d15a2ac9b07"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.487202 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "37cd25e0-a46b-4f44-a271-2d15a2ac9b07" (UID: "37cd25e0-a46b-4f44-a271-2d15a2ac9b07"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.487717 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "37cd25e0-a46b-4f44-a271-2d15a2ac9b07" (UID: "37cd25e0-a46b-4f44-a271-2d15a2ac9b07"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.488039 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "37cd25e0-a46b-4f44-a271-2d15a2ac9b07" (UID: "37cd25e0-a46b-4f44-a271-2d15a2ac9b07"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.577533 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/18b4ebfb-ef47-4893-9f78-d6562b229c0c-audit-dir\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.577687 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.577689 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/18b4ebfb-ef47-4893-9f78-d6562b229c0c-audit-dir\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.577819 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.577872 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-756c4\" (UniqueName: \"kubernetes.io/projected/18b4ebfb-ef47-4893-9f78-d6562b229c0c-kube-api-access-756c4\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.577917 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-user-template-error\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: 
\"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.577951 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-service-ca\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.577989 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-session\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578027 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578066 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-user-template-login\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578143 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-router-certs\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578218 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/18b4ebfb-ef47-4893-9f78-d6562b229c0c-audit-policies\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578257 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578289 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " 
pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578360 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578431 4858 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578451 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578475 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578497 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578517 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578538 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578579 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578598 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zglst\" (UniqueName: \"kubernetes.io/projected/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-kube-api-access-zglst\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578615 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578640 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578659 4858 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578679 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.578698 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/37cd25e0-a46b-4f44-a271-2d15a2ac9b07-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.579658 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.579689 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/18b4ebfb-ef47-4893-9f78-d6562b229c0c-audit-policies\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.580157 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-service-ca\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.581856 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.582042 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.582183 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.584213 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.584891 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-router-certs\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.585224 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-user-template-login\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.587036 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-system-session\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.587317 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-user-template-error\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.587365 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/18b4ebfb-ef47-4893-9f78-d6562b229c0c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.596465 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-756c4\" (UniqueName: \"kubernetes.io/projected/18b4ebfb-ef47-4893-9f78-d6562b229c0c-kube-api-access-756c4\") pod \"oauth-openshift-fc667b7f-p64j4\" (UID: \"18b4ebfb-ef47-4893-9f78-d6562b229c0c\") " pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:02 crc kubenswrapper[4858]: I0127 20:12:02.707510 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.105890 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.105880 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mzt2r" event={"ID":"37cd25e0-a46b-4f44-a271-2d15a2ac9b07","Type":"ContainerDied","Data":"c64bf76e77fb5f5c47fec98a664706efc8094ce0f63bfd1d512b013958a2c52e"} Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.106373 4858 scope.go:117] "RemoveContainer" containerID="71de311a25d593c305f43e78f57265bfbc26c9f7875bb94720272b9da8fad2b9" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.136529 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mzt2r"] Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.140238 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mzt2r"] Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.195114 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-fc667b7f-p64j4"] Jan 27 20:12:03 crc kubenswrapper[4858]: W0127 20:12:03.201484 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18b4ebfb_ef47_4893_9f78_d6562b229c0c.slice/crio-6a60b3edb0fd147d07729b0cc883ee9762c040f3881b18a4c9b3736b98b249a9 WatchSource:0}: Error finding container 6a60b3edb0fd147d07729b0cc883ee9762c040f3881b18a4c9b3736b98b249a9: Status 404 returned error can't find the container with id 6a60b3edb0fd147d07729b0cc883ee9762c040f3881b18a4c9b3736b98b249a9 Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.712103 4858 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.713133 4858 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.713253 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.713412 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165" gracePeriod=15 Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.713484 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78" gracePeriod=15 Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.713536 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730" gracePeriod=15 Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.713595 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47" gracePeriod=15 Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.713629 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3" gracePeriod=15 Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.714336 4858 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 20:12:03 crc kubenswrapper[4858]: E0127 20:12:03.714630 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.714665 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 20:12:03 crc kubenswrapper[4858]: E0127 20:12:03.714691 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.714698 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 20:12:03 crc kubenswrapper[4858]: E0127 20:12:03.714711 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.714719 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 20:12:03 crc kubenswrapper[4858]: E0127 20:12:03.714726 4858 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.714731 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 20:12:03 crc kubenswrapper[4858]: E0127 20:12:03.714740 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.714746 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 20:12:03 crc kubenswrapper[4858]: E0127 20:12:03.714755 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.714761 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 20:12:03 crc kubenswrapper[4858]: E0127 20:12:03.714774 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.714779 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.714881 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.714906 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.714918 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.714926 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.714933 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.714941 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.714948 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 20:12:03 crc kubenswrapper[4858]: E0127 20:12:03.715044 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.715050 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.794995 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.795211 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.795300 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.795334 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.795363 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.795388 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.795433 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.795479 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.896750 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 
20:12:03.896832 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.896891 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.896912 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.896926 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.896950 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.896974 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.897015 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.897098 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.897137 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.897159 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.897180 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.897200 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.897227 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.897247 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: I0127 20:12:03.897267 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:03 crc kubenswrapper[4858]: E0127 20:12:03.951156 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.56:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-2r5qs.188eaf8d0d389114 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-2r5qs,UID:da279f23-0e34-40de-9b49-f325361ce0ff,APIVersion:v1,ResourceVersion:28223,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 20:12:03.950244116 +0000 UTC m=+268.658059822,LastTimestamp:2026-01-27 20:12:03.950244116 +0000 UTC m=+268.658059822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 20:12:04 crc kubenswrapper[4858]: I0127 20:12:04.093736 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37cd25e0-a46b-4f44-a271-2d15a2ac9b07" path="/var/lib/kubelet/pods/37cd25e0-a46b-4f44-a271-2d15a2ac9b07/volumes" Jan 27 20:12:04 crc kubenswrapper[4858]: I0127 20:12:04.112899 4858 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" event={"ID":"18b4ebfb-ef47-4893-9f78-d6562b229c0c","Type":"ContainerStarted","Data":"6a60b3edb0fd147d07729b0cc883ee9762c040f3881b18a4c9b3736b98b249a9"} Jan 27 20:12:04 crc kubenswrapper[4858]: I0127 20:12:04.118810 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5qs" event={"ID":"da279f23-0e34-40de-9b49-f325361ce0ff","Type":"ContainerStarted","Data":"3e9757943174b0f4be6c6e0d2d492b964125f1918ffa347861aa4a93e0808cf4"} Jan 27 20:12:05 crc kubenswrapper[4858]: I0127 20:12:05.125215 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" event={"ID":"18b4ebfb-ef47-4893-9f78-d6562b229c0c","Type":"ContainerStarted","Data":"c07b67a2719dceaadf86eb296abc45cf3b2cba2073759c01a1cc720ce02aa1db"} Jan 27 20:12:05 crc kubenswrapper[4858]: I0127 20:12:05.127366 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 20:12:05 crc kubenswrapper[4858]: I0127 20:12:05.128658 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 20:12:05 crc kubenswrapper[4858]: I0127 20:12:05.129364 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3" exitCode=2 Jan 27 20:12:05 crc kubenswrapper[4858]: I0127 20:12:05.131280 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9vrj" event={"ID":"9cdbabda-bda6-438a-a671-0f15b0ad57c0","Type":"ContainerStarted","Data":"0cf52353bcd410874368c8357627e8c71701369836fcb80245f707e56c82c8ab"} Jan 27 20:12:05 crc kubenswrapper[4858]: I0127 20:12:05.132064 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.074900 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.076212 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.139385 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.141476 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.142288 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78" exitCode=0 Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.142325 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730" exitCode=0 Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.142337 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47" exitCode=0 Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.142409 4858 scope.go:117] "RemoveContainer" containerID="e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69" Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.144467 4858 generic.go:334] "Generic (PLEG): container finished" podID="e50de7cf-1829-431e-8655-0e948b1695f7" containerID="106fbbcb006d9526b110f37e092977076ed9a09ff41aa2933c387bcdf206f31c" exitCode=0 Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.144805 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e50de7cf-1829-431e-8655-0e948b1695f7","Type":"ContainerDied","Data":"106fbbcb006d9526b110f37e092977076ed9a09ff41aa2933c387bcdf206f31c"} Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.145427 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.148442 4858 patch_prober.go:28] interesting pod/oauth-openshift-fc667b7f-p64j4 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" start-of-body= Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.148521 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.148567 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.149180 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.149990 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" 
pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.150483 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.150989 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.151238 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:06 crc kubenswrapper[4858]: I0127 20:12:06.152505 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:06 crc kubenswrapper[4858]: E0127 20:12:06.497524 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.56:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-2r5qs.188eaf8d0d389114 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-2r5qs,UID:da279f23-0e34-40de-9b49-f325361ce0ff,APIVersion:v1,ResourceVersion:28223,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 20:12:03.950244116 +0000 UTC m=+268.658059822,LastTimestamp:2026-01-27 20:12:03.950244116 +0000 UTC m=+268.658059822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 20:12:07 crc kubenswrapper[4858]: I0127 20:12:07.150722 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-fc667b7f-p64j4_18b4ebfb-ef47-4893-9f78-d6562b229c0c/oauth-openshift/0.log" Jan 27 20:12:07 crc kubenswrapper[4858]: I0127 20:12:07.150773 4858 generic.go:334] "Generic (PLEG): container finished" podID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" containerID="c07b67a2719dceaadf86eb296abc45cf3b2cba2073759c01a1cc720ce02aa1db" exitCode=255 Jan 27 20:12:07 crc kubenswrapper[4858]: I0127 20:12:07.150827 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" event={"ID":"18b4ebfb-ef47-4893-9f78-d6562b229c0c","Type":"ContainerDied","Data":"c07b67a2719dceaadf86eb296abc45cf3b2cba2073759c01a1cc720ce02aa1db"} Jan 27 20:12:07 crc kubenswrapper[4858]: I0127 20:12:07.151277 4858 scope.go:117] "RemoveContainer" containerID="c07b67a2719dceaadf86eb296abc45cf3b2cba2073759c01a1cc720ce02aa1db" Jan 27 20:12:07 crc kubenswrapper[4858]: I0127 20:12:07.151534 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:07 crc kubenswrapper[4858]: I0127 20:12:07.151839 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:07 crc kubenswrapper[4858]: I0127 20:12:07.152061 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:07 crc kubenswrapper[4858]: I0127 20:12:07.152252 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:07 crc kubenswrapper[4858]: I0127 20:12:07.953516 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:12:07 crc kubenswrapper[4858]: I0127 20:12:07.953994 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:12:07 crc kubenswrapper[4858]: W0127 20:12:07.954464 4858 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27199": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:12:07 crc kubenswrapper[4858]: E0127 20:12:07.954632 4858 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27199\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:12:07 crc kubenswrapper[4858]: W0127 20:12:07.954603 4858 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27201": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:12:07 crc kubenswrapper[4858]: E0127 20:12:07.954712 4858 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27201\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:12:08 crc kubenswrapper[4858]: I0127 20:12:08.055569 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:12:08 crc kubenswrapper[4858]: I0127 20:12:08.055779 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:12:08 crc kubenswrapper[4858]: W0127 20:12:08.056976 4858 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27199": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:12:08 crc kubenswrapper[4858]: E0127 20:12:08.057071 4858 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27199\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:12:08 crc kubenswrapper[4858]: E0127 20:12:08.080316 4858 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.56:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" volumeName="registry-storage" Jan 27 20:12:08 crc kubenswrapper[4858]: I0127 20:12:08.167196 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 20:12:08 crc kubenswrapper[4858]: I0127 20:12:08.168622 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165" exitCode=0 Jan 27 20:12:08 crc kubenswrapper[4858]: E0127 20:12:08.748610 4858 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.56:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:08 crc kubenswrapper[4858]: I0127 20:12:08.749259 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:08 crc kubenswrapper[4858]: E0127 20:12:08.954624 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 20:12:08 crc kubenswrapper[4858]: E0127 20:12:08.954724 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:14:10.954703324 +0000 UTC m=+395.662519030 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition Jan 27 20:12:08 crc kubenswrapper[4858]: E0127 20:12:08.957150 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition Jan 27 20:12:08 crc kubenswrapper[4858]: E0127 20:12:08.957308 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 20:14:10.9572807 +0000 UTC m=+395.665096406 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition Jan 27 20:12:08 crc kubenswrapper[4858]: I0127 20:12:08.973585 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2r5qs" Jan 27 20:12:08 crc kubenswrapper[4858]: I0127 20:12:08.974057 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2r5qs" Jan 27 20:12:09 crc kubenswrapper[4858]: I0127 20:12:09.019942 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2r5qs" Jan 27 20:12:09 crc kubenswrapper[4858]: I0127 20:12:09.021316 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:09 crc kubenswrapper[4858]: I0127 20:12:09.022680 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:09 crc kubenswrapper[4858]: I0127 20:12:09.022962 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:09 crc kubenswrapper[4858]: I0127 20:12:09.023136 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:09 crc kubenswrapper[4858]: E0127 20:12:09.056794 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 27 20:12:09 crc kubenswrapper[4858]: E0127 20:12:09.056875 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 27 20:12:09 crc kubenswrapper[4858]: W0127 20:12:09.057642 4858 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27199": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:12:09 crc kubenswrapper[4858]: E0127 20:12:09.057770 4858 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to 
list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27199\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:12:09 crc kubenswrapper[4858]: I0127 20:12:09.225105 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2r5qs" Jan 27 20:12:09 crc kubenswrapper[4858]: I0127 20:12:09.225624 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:09 crc kubenswrapper[4858]: I0127 20:12:09.225935 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:09 crc kubenswrapper[4858]: I0127 20:12:09.226239 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:09 crc kubenswrapper[4858]: I0127 20:12:09.226500 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:09 crc kubenswrapper[4858]: I0127 20:12:09.238594 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b9vrj" Jan 27 20:12:09 crc kubenswrapper[4858]: I0127 20:12:09.238638 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b9vrj" Jan 27 20:12:09 crc kubenswrapper[4858]: I0127 20:12:09.292904 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b9vrj" Jan 27 20:12:09 crc kubenswrapper[4858]: I0127 20:12:09.293492 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:09 crc kubenswrapper[4858]: I0127 20:12:09.293922 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:09 crc kubenswrapper[4858]: I0127 20:12:09.294154 4858 status_manager.go:851] "Failed to get status 
for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:09 crc kubenswrapper[4858]: I0127 20:12:09.294346 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:10 crc kubenswrapper[4858]: E0127 20:12:10.057190 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 27 20:12:10 crc kubenswrapper[4858]: E0127 20:12:10.057241 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition Jan 27 20:12:10 crc kubenswrapper[4858]: E0127 20:12:10.057201 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 27 20:12:10 crc kubenswrapper[4858]: E0127 20:12:10.057332 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 20:14:12.05730177 +0000 UTC m=+396.765117486 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition Jan 27 20:12:10 crc kubenswrapper[4858]: E0127 20:12:10.057345 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition Jan 27 20:12:10 crc kubenswrapper[4858]: E0127 20:12:10.057398 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 20:14:12.057384712 +0000 UTC m=+396.765200418 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition Jan 27 20:12:10 crc kubenswrapper[4858]: I0127 20:12:10.231781 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b9vrj" Jan 27 20:12:10 crc kubenswrapper[4858]: I0127 20:12:10.232378 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:10 crc kubenswrapper[4858]: I0127 20:12:10.233317 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:10 crc kubenswrapper[4858]: I0127 20:12:10.233811 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:10 crc kubenswrapper[4858]: I0127 20:12:10.235094 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:10 crc kubenswrapper[4858]: W0127 20:12:10.265937 4858 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27201": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:12:10 crc kubenswrapper[4858]: E0127 20:12:10.266058 4858 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27201\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:12:10 crc kubenswrapper[4858]: W0127 20:12:10.315933 4858 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27199": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:12:10 crc kubenswrapper[4858]: E0127 20:12:10.316027 4858 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27199\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:12:10 crc kubenswrapper[4858]: W0127 20:12:10.890611 4858 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27199": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:12:10 crc kubenswrapper[4858]: E0127 20:12:10.891086 4858 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27199\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:12:10 crc kubenswrapper[4858]: W0127 20:12:10.921350 4858 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27199": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:12:10 crc kubenswrapper[4858]: E0127 20:12:10.921604 4858 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27199\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:12.642090 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:12.642373 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:12.642826 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:12.643098 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:12.643351 4858 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.643463 4858 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:12.643747 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.56:6443: connect: connection refused" interval="200ms" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.707923 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.731195 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.731939 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.732597 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.732884 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.733242 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.739866 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.740784 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.741297 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.741675 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.741963 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.742203 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.742405 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.834876 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e50de7cf-1829-431e-8655-0e948b1695f7-var-lock\") pod \"e50de7cf-1829-431e-8655-0e948b1695f7\" (UID: \"e50de7cf-1829-431e-8655-0e948b1695f7\") " Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.834959 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.834992 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.834985 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e50de7cf-1829-431e-8655-0e948b1695f7-var-lock" (OuterVolumeSpecName: "var-lock") pod "e50de7cf-1829-431e-8655-0e948b1695f7" (UID: "e50de7cf-1829-431e-8655-0e948b1695f7"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.835071 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e50de7cf-1829-431e-8655-0e948b1695f7-kube-api-access\") pod \"e50de7cf-1829-431e-8655-0e948b1695f7\" (UID: \"e50de7cf-1829-431e-8655-0e948b1695f7\") " Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.835081 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.835105 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e50de7cf-1829-431e-8655-0e948b1695f7-kubelet-dir\") pod \"e50de7cf-1829-431e-8655-0e948b1695f7\" (UID: \"e50de7cf-1829-431e-8655-0e948b1695f7\") " Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.835127 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.835135 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.835151 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e50de7cf-1829-431e-8655-0e948b1695f7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e50de7cf-1829-431e-8655-0e948b1695f7" (UID: "e50de7cf-1829-431e-8655-0e948b1695f7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.835234 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.835379 4858 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.835390 4858 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.835398 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e50de7cf-1829-431e-8655-0e948b1695f7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.835405 4858 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.835416 4858 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e50de7cf-1829-431e-8655-0e948b1695f7-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.841884 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e50de7cf-1829-431e-8655-0e948b1695f7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e50de7cf-1829-431e-8655-0e948b1695f7" (UID: "e50de7cf-1829-431e-8655-0e948b1695f7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:12.844473 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.56:6443: connect: connection refused" interval="400ms" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:12.937689 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e50de7cf-1829-431e-8655-0e948b1695f7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:13.207522 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e50de7cf-1829-431e-8655-0e948b1695f7","Type":"ContainerDied","Data":"ce509a6cf06deb01e804fa998673d6ffbc71838fd8d9634015af8dfbc215ba72"} Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:13.207577 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce509a6cf06deb01e804fa998673d6ffbc71838fd8d9634015af8dfbc215ba72" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:13.207608 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:13.210704 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:13.211579 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:13.225431 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:13.225964 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:13.226284 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:13.226577 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:13.226819 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:13.233568 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:13.233893 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:13.234232 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:13.234519 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:13.234771 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:13.245434 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.56:6443: connect: connection refused" interval="800ms" Jan 27 20:12:21 crc kubenswrapper[4858]: W0127 20:12:13.574235 4858 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27201": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:13.574526 4858 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27201\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:14.045927 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.56:6443: connect: connection refused" interval="1.6s" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:14.084716 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:14.761902 4858 scope.go:117] "RemoveContainer" containerID="f9cc60fa5e1dbe5999adbcf59a2ec494a9595024f9fa6a7bdd1f41c389c50b78" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:15.102251 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert nginx-conf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:15.107651 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cqllr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:15.230164 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 20:12:21 crc 
kubenswrapper[4858]: E0127 20:12:15.647486 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.56:6443: connect: connection refused" interval="3.2s" Jan 27 20:12:21 crc kubenswrapper[4858]: W0127 20:12:15.882952 4858 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27199": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:15.883090 4858 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27199\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:16.076383 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:16.077141 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:16.077939 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:16.078306 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: W0127 20:12:16.087243 4858 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27199": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:16.087340 4858 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27199\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:16.093223 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s2dwl], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 20:12:21 crc kubenswrapper[4858]: W0127 20:12:16.473232 4858 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27199": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:16.473364 4858 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27199\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:16.498976 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.56:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-2r5qs.188eaf8d0d389114 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-2r5qs,UID:da279f23-0e34-40de-9b49-f325361ce0ff,APIVersion:v1,ResourceVersion:28223,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 20:12:03.950244116 +0000 UTC m=+268.658059822,LastTimestamp:2026-01-27 20:12:03.950244116 +0000 UTC m=+268.658059822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.070909 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.071968 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.072762 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.073801 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.074449 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.090823 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88aaef03-76aa-447e-98ee-ca909788fbdd" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.090879 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88aaef03-76aa-447e-98ee-ca909788fbdd" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:17.091859 4858 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.092885 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.246163 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.246213 4858 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5" exitCode=1 Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.246247 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5"} Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.246760 4858 scope.go:117] "RemoveContainer" containerID="e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.247048 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.247716 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.248008 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.248247 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.248448 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:17.312072 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:18.498902 4858 scope.go:117] "RemoveContainer" containerID="e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:18.499933 4858 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\": container with ID starting with e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69 not found: ID does not exist" containerID="e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:18.499969 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69"} err="failed to get container status \"e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\": rpc error: code = NotFound desc = could not find container \"e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69\": container with ID starting with e5954e1f9d7a63f131d0f079bf95bf60ac95fcb86a07edb177e8cfe86dadba69 not found: ID does not exist" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:18.499999 4858 scope.go:117] "RemoveContainer" containerID="ddb3814c0c0231db69bc96813d65e5ee6f73df60be10f5b1da29ec93ef9c5730" Jan 27 20:12:21 crc kubenswrapper[4858]: W0127 20:12:18.527510 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-bf16acbeed42d805b2d458d80832862adb2784671a7034af6fc0ad3e60f9bd1e WatchSource:0}: Error finding container bf16acbeed42d805b2d458d80832862adb2784671a7034af6fc0ad3e60f9bd1e: Status 404 returned error can't find the container with id bf16acbeed42d805b2d458d80832862adb2784671a7034af6fc0ad3e60f9bd1e Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:18.542154 4858 scope.go:117] "RemoveContainer" containerID="b4f5ed5ae020900c3cf6f756702c46805d89a0856a239a8ba816946ebf340f47" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:18.594225 4858 scope.go:117] "RemoveContainer" containerID="4d7f1eda6df16a83ca4af0037889f9065a903d00e3ad073bc06068a249d425b3" Jan 27 20:12:21 crc kubenswrapper[4858]: W0127 20:12:18.598605 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-e5ef55bb67b6118342cb0d12e6f96b3fc56f8abf229334ecd34328f566b183e5 WatchSource:0}: Error finding container e5ef55bb67b6118342cb0d12e6f96b3fc56f8abf229334ecd34328f566b183e5: Status 404 returned error can't find the container with id e5ef55bb67b6118342cb0d12e6f96b3fc56f8abf229334ecd34328f566b183e5 Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:18.632750 4858 scope.go:117] "RemoveContainer" containerID="d0be4bbf9cd815358e8d83bab131649c4a8ebe45c4bc2d3850cedcae0daac165" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:18.705892 4858 scope.go:117] "RemoveContainer" containerID="50e28aa5cf086349bcc75241f9e0d97f86f9503b63ee9938984654e50327ffce" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:18.848995 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.56:6443: connect: connection refused" interval="6.4s" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:19.018614 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:19.260877 
4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e5ef55bb67b6118342cb0d12e6f96b3fc56f8abf229334ecd34328f566b183e5"} Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:19.262159 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bf16acbeed42d805b2d458d80832862adb2784671a7034af6fc0ad3e60f9bd1e"} Jan 27 20:12:21 crc kubenswrapper[4858]: W0127 20:12:20.101645 4858 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27201": dial tcp 38.129.56.56:6443: connect: connection refused Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:20.101734 4858 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27201\": dial tcp 38.129.56.56:6443: connect: connection refused" logger="UnhandledError" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.269840 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gnzjf" event={"ID":"ad57fc45-ce61-4d62-adb4-2a655f77e751","Type":"ContainerStarted","Data":"b66b491bddb3ac20e20722e0414ab4c8df70231c97a5895147e3062d585856c7"} Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.270868 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.271185 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.271588 4858 status_manager.go:851] "Failed to get status for pod" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" pod="openshift-marketplace/redhat-operators-gnzjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gnzjf\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.271830 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.272280 4858 status_manager.go:851] "Failed to get status for pod" 
podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.272674 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.272824 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4jtm" event={"ID":"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc","Type":"ContainerStarted","Data":"b24938f7a772f87c3143bc4100b6b6909a156d798e437308562fc3fbaa1da07c"} Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.273327 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.273570 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.273738 4858 status_manager.go:851] "Failed to get status for pod" podUID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" pod="openshift-marketplace/certified-operators-j4jtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4jtm\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.273933 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.274212 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.274436 4858 status_manager.go:851] "Failed to get status for pod" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" pod="openshift-marketplace/redhat-operators-gnzjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gnzjf\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.274675 4858 status_manager.go:851] "Failed to get status for pod" 
podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.274979 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hps49" event={"ID":"26fc1461-1071-4f74-9d54-4de6f9a268dc","Type":"ContainerStarted","Data":"041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3"} Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.275741 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.275975 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.276213 4858 status_manager.go:851] "Failed to get status for pod" podUID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" pod="openshift-marketplace/certified-operators-j4jtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4jtm\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.276702 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.276959 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.277136 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"82fa22465664468c89bd98ff7e3c12308c0ec1e10ff355833b7faa942e885aa2"} Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.277216 4858 status_manager.go:851] "Failed to get status for pod" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" pod="openshift-marketplace/redhat-operators-gnzjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gnzjf\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.277475 4858 status_manager.go:851] "Failed to get status for pod" podUID="26fc1461-1071-4f74-9d54-4de6f9a268dc" pod="openshift-marketplace/redhat-operators-hps49" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hps49\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.277742 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.280121 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.280192 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"15f23e9dd88779c49a1c0a606155ff3f7b5cbc4d1c4b401ce54e9e653054bafd"} Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.281899 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-fc667b7f-p64j4_18b4ebfb-ef47-4893-9f78-d6562b229c0c/oauth-openshift/0.log" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.281956 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" event={"ID":"18b4ebfb-ef47-4893-9f78-d6562b229c0c","Type":"ContainerStarted","Data":"a08501913d15f383a1397a8e6048c9b3e1ea2c6284893d2eb638f3c32a763b2e"} Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.282954 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f6a3a469708a37655e2c15d2354a2caebc0c230fb8fa9cb63678db3324a1d4d8"} Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.284397 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rl5k9" event={"ID":"a57b4016-f4b5-4f01-aeed-9a730cd323c1","Type":"ContainerStarted","Data":"d98324c23116de1914ceaf800e7c57fcaaed2366359984d42a4ba0c909a370a0"} Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.284944 4858 status_manager.go:851] "Failed to get status for pod" podUID="26fc1461-1071-4f74-9d54-4de6f9a268dc" pod="openshift-marketplace/redhat-operators-hps49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hps49\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.285219 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.285569 4858 status_manager.go:851] "Failed to get status for pod" podUID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" pod="openshift-marketplace/community-operators-rl5k9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rl5k9\": dial tcp 
38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.285976 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.286224 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.286493 4858 status_manager.go:851] "Failed to get status for pod" podUID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" pod="openshift-marketplace/certified-operators-j4jtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4jtm\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.286786 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.287073 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:20.287307 4858 status_manager.go:851] "Failed to get status for pod" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" pod="openshift-marketplace/redhat-operators-gnzjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gnzjf\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.277814 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gnzjf" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.278148 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gnzjf" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.292848 4858 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f6a3a469708a37655e2c15d2354a2caebc0c230fb8fa9cb63678db3324a1d4d8" exitCode=0 Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.292902 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f6a3a469708a37655e2c15d2354a2caebc0c230fb8fa9cb63678db3324a1d4d8"} Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.293292 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="88aaef03-76aa-447e-98ee-ca909788fbdd" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.293310 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88aaef03-76aa-447e-98ee-ca909788fbdd" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.293438 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:21.293696 4858 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:21 crc kubenswrapper[4858]: E0127 20:12:21.294388 4858 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.56:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.294423 4858 patch_prober.go:28] interesting pod/oauth-openshift-fc667b7f-p64j4 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" start-of-body= Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.294456 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.293751 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.294700 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.294944 4858 status_manager.go:851] "Failed to get status for pod" podUID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" pod="openshift-marketplace/certified-operators-j4jtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4jtm\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.295215 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.295363 4858 
status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.295498 4858 status_manager.go:851] "Failed to get status for pod" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" pod="openshift-marketplace/redhat-operators-gnzjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gnzjf\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.295745 4858 status_manager.go:851] "Failed to get status for pod" podUID="26fc1461-1071-4f74-9d54-4de6f9a268dc" pod="openshift-marketplace/redhat-operators-hps49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hps49\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.295947 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.296107 4858 status_manager.go:851] "Failed to get status for pod" podUID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" pod="openshift-marketplace/community-operators-rl5k9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rl5k9\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.296328 4858 status_manager.go:851] "Failed to get status for pod" podUID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" pod="openshift-marketplace/community-operators-rl5k9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rl5k9\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.296496 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.296698 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.296914 4858 status_manager.go:851] "Failed to get status for pod" podUID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" pod="openshift-marketplace/certified-operators-j4jtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4jtm\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc 
kubenswrapper[4858]: I0127 20:12:21.297172 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.297433 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.297660 4858 status_manager.go:851] "Failed to get status for pod" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" pod="openshift-marketplace/redhat-operators-gnzjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gnzjf\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.297891 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.298182 4858 status_manager.go:851] "Failed to get status for pod" podUID="26fc1461-1071-4f74-9d54-4de6f9a268dc" pod="openshift-marketplace/redhat-operators-hps49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hps49\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.624635 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hps49" Jan 27 20:12:21 crc kubenswrapper[4858]: I0127 20:12:21.624731 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hps49" Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:22.300144 4858 patch_prober.go:28] interesting pod/oauth-openshift-fc667b7f-p64j4 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" start-of-body= Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:22.300602 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:22.316872 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gnzjf" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" containerName="registry-server" probeResult="failure" output=< Jan 27 20:12:23 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 20:12:23 crc kubenswrapper[4858]: > Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:22.688615 4858 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hps49" podUID="26fc1461-1071-4f74-9d54-4de6f9a268dc" containerName="registry-server" probeResult="failure" output=< Jan 27 20:12:23 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 20:12:23 crc kubenswrapper[4858]: > Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:22.709136 4858 patch_prober.go:28] interesting pod/oauth-openshift-fc667b7f-p64j4 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" start-of-body= Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:22.709261 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:23.305707 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-fc667b7f-p64j4_18b4ebfb-ef47-4893-9f78-d6562b229c0c/oauth-openshift/1.log" Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:23.306260 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-fc667b7f-p64j4_18b4ebfb-ef47-4893-9f78-d6562b229c0c/oauth-openshift/0.log" Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:23.306307 4858 generic.go:334] "Generic (PLEG): container finished" podID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" containerID="a08501913d15f383a1397a8e6048c9b3e1ea2c6284893d2eb638f3c32a763b2e" exitCode=255 Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:23.306340 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" event={"ID":"18b4ebfb-ef47-4893-9f78-d6562b229c0c","Type":"ContainerDied","Data":"a08501913d15f383a1397a8e6048c9b3e1ea2c6284893d2eb638f3c32a763b2e"} Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:23.306377 4858 scope.go:117] "RemoveContainer" containerID="c07b67a2719dceaadf86eb296abc45cf3b2cba2073759c01a1cc720ce02aa1db" Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:23.306964 4858 scope.go:117] "RemoveContainer" containerID="a08501913d15f383a1397a8e6048c9b3e1ea2c6284893d2eb638f3c32a763b2e" Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:23.307159 4858 status_manager.go:851] "Failed to get status for pod" podUID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" pod="openshift-marketplace/community-operators-rl5k9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rl5k9\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:23 crc kubenswrapper[4858]: E0127 20:12:23.307308 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-fc667b7f-p64j4_openshift-authentication(18b4ebfb-ef47-4893-9f78-d6562b229c0c)\"" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:23.307577 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:23.307867 4858 status_manager.go:851] "Failed to get status for pod" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" pod="openshift-marketplace/community-operators-b9vrj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-b9vrj\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:23.308165 4858 status_manager.go:851] "Failed to get status for pod" podUID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" pod="openshift-marketplace/certified-operators-j4jtm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j4jtm\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:23.314727 4858 status_manager.go:851] "Failed to get status for pod" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:23.315309 4858 status_manager.go:851] "Failed to get status for pod" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" pod="openshift-marketplace/certified-operators-2r5qs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2r5qs\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:23.315877 4858 status_manager.go:851] "Failed to get status for pod" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" pod="openshift-marketplace/redhat-operators-gnzjf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gnzjf\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:23.316172 4858 status_manager.go:851] "Failed to get status for pod" podUID="26fc1461-1071-4f74-9d54-4de6f9a268dc" pod="openshift-marketplace/redhat-operators-hps49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hps49\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:23 crc kubenswrapper[4858]: I0127 20:12:23.316483 4858 status_manager.go:851] "Failed to get status for pod" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-fc667b7f-p64j4\": dial tcp 38.129.56.56:6443: connect: connection refused" Jan 27 20:12:24 crc kubenswrapper[4858]: I0127 20:12:24.313092 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"78ba945c6d68bf6d760255e6a554dc5aff1cae643206ef1310b8a61b6dc1982a"} Jan 27 20:12:25 crc kubenswrapper[4858]: I0127 20:12:25.320250 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication_oauth-openshift-fc667b7f-p64j4_18b4ebfb-ef47-4893-9f78-d6562b229c0c/oauth-openshift/1.log" Jan 27 20:12:26 crc kubenswrapper[4858]: I0127 20:12:26.334880 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6eb229a57d07647dd5f091d6b2d692eca6691a342bb808cdc7777206e1d4ffe7"} Jan 27 20:12:26 crc kubenswrapper[4858]: I0127 20:12:26.335231 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88aaef03-76aa-447e-98ee-ca909788fbdd" Jan 27 20:12:26 crc kubenswrapper[4858]: I0127 20:12:26.335245 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"11e54866ac4229c2b45e88e1c73959c61de586c5fd51f066f5f0125251f80c1f"} Jan 27 20:12:26 crc kubenswrapper[4858]: I0127 20:12:26.335261 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88aaef03-76aa-447e-98ee-ca909788fbdd" Jan 27 20:12:26 crc kubenswrapper[4858]: I0127 20:12:26.335267 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d36189ca90020e80e400dd459f51637058f753ddcb29133516d08d7ebdc78f3e"} Jan 27 20:12:26 crc kubenswrapper[4858]: I0127 20:12:26.335326 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:26 crc kubenswrapper[4858]: I0127 20:12:26.335342 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5670db4e69256359cedafb26fb008bf48663e5eeb1fd09753217052d781d9558"} Jan 27 20:12:27 crc kubenswrapper[4858]: I0127 20:12:27.070280 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:12:27 crc kubenswrapper[4858]: I0127 20:12:27.093600 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:27 crc kubenswrapper[4858]: I0127 20:12:27.093877 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:27 crc kubenswrapper[4858]: I0127 20:12:27.099634 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:27 crc kubenswrapper[4858]: I0127 20:12:27.312061 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:12:27 crc kubenswrapper[4858]: I0127 20:12:27.852041 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 20:12:28 crc kubenswrapper[4858]: I0127 20:12:28.439032 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j4jtm" Jan 27 20:12:28 crc kubenswrapper[4858]: I0127 20:12:28.440117 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j4jtm" Jan 27 20:12:28 crc kubenswrapper[4858]: I0127 20:12:28.481134 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j4jtm" Jan 27 20:12:29 crc kubenswrapper[4858]: I0127 20:12:29.018062 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:12:29 crc kubenswrapper[4858]: I0127 20:12:29.018336 4858 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 20:12:29 crc kubenswrapper[4858]: I0127 20:12:29.018480 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 20:12:29 crc kubenswrapper[4858]: I0127 20:12:29.070987 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:12:29 crc kubenswrapper[4858]: I0127 20:12:29.071025 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 20:12:29 crc kubenswrapper[4858]: I0127 20:12:29.232649 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rl5k9"
Jan 27 20:12:29 crc kubenswrapper[4858]: I0127 20:12:29.232697 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rl5k9"
Jan 27 20:12:29 crc kubenswrapper[4858]: I0127 20:12:29.276773 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rl5k9"
Jan 27 20:12:29 crc kubenswrapper[4858]: I0127 20:12:29.392179 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j4jtm"
Jan 27 20:12:29 crc kubenswrapper[4858]: I0127 20:12:29.396676 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rl5k9"
Jan 27 20:12:29 crc kubenswrapper[4858]: I0127 20:12:29.997856 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 27 20:12:30 crc kubenswrapper[4858]: I0127 20:12:30.693444 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 27 20:12:31 crc kubenswrapper[4858]: I0127 20:12:31.314335 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gnzjf"
Jan 27 20:12:31 crc kubenswrapper[4858]: I0127 20:12:31.344700 4858 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 20:12:31 crc kubenswrapper[4858]: I0127 20:12:31.354666 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gnzjf"
Jan 27 20:12:31 crc kubenswrapper[4858]: I0127 20:12:31.361487 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88aaef03-76aa-447e-98ee-ca909788fbdd"
Jan 27 20:12:31 crc kubenswrapper[4858]: I0127 20:12:31.361519 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88aaef03-76aa-447e-98ee-ca909788fbdd"
Jan 27 20:12:31 crc kubenswrapper[4858]: I0127 20:12:31.366602 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 20:12:31 crc kubenswrapper[4858]: I0127 20:12:31.375666 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0a88a361-e1fa-4902-b24f-c34183b69e8e"
Jan 27 20:12:31 crc kubenswrapper[4858]: I0127 20:12:31.664748 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hps49"
Jan 27 20:12:31 crc kubenswrapper[4858]: I0127 20:12:31.704279 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hps49"
Jan 27 20:12:32 crc kubenswrapper[4858]: I0127 20:12:32.366758 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88aaef03-76aa-447e-98ee-ca909788fbdd"
Jan 27 20:12:32 crc kubenswrapper[4858]: I0127 20:12:32.366793 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="88aaef03-76aa-447e-98ee-ca909788fbdd"
Jan 27 20:12:32 crc kubenswrapper[4858]: I0127 20:12:32.708470 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4"
Jan 27 20:12:32 crc kubenswrapper[4858]: I0127 20:12:32.709674 4858 scope.go:117] "RemoveContainer" containerID="a08501913d15f383a1397a8e6048c9b3e1ea2c6284893d2eb638f3c32a763b2e"
Jan 27 20:12:33 crc kubenswrapper[4858]: I0127 20:12:33.374315 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-fc667b7f-p64j4_18b4ebfb-ef47-4893-9f78-d6562b229c0c/oauth-openshift/1.log"
Jan 27 20:12:33 crc kubenswrapper[4858]: I0127 20:12:33.375002 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" event={"ID":"18b4ebfb-ef47-4893-9f78-d6562b229c0c","Type":"ContainerStarted","Data":"b631c59911ea0daf17844c62c6bc8ec81b586fbb239a802b7dd36040959d1a09"}
Jan 27 20:12:33 crc kubenswrapper[4858]: I0127 20:12:33.375410 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4"
Jan 27 20:12:33 crc kubenswrapper[4858]: I0127 20:12:33.829429 4858 patch_prober.go:28] interesting pod/oauth-openshift-fc667b7f-p64j4 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": read tcp 10.217.0.2:34276->10.217.0.56:6443: read: connection reset by peer" start-of-body=
Jan 27 20:12:33 crc kubenswrapper[4858]: I0127 20:12:33.830021 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": read tcp 10.217.0.2:34276->10.217.0.56:6443: read: connection reset by peer"
Jan 27 20:12:34 crc kubenswrapper[4858]: I0127 20:12:34.385449 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-fc667b7f-p64j4_18b4ebfb-ef47-4893-9f78-d6562b229c0c/oauth-openshift/2.log"
Jan 27 20:12:34 crc kubenswrapper[4858]: I0127 20:12:34.386181 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-fc667b7f-p64j4_18b4ebfb-ef47-4893-9f78-d6562b229c0c/oauth-openshift/1.log"
Jan 27 20:12:34 crc kubenswrapper[4858]: I0127 20:12:34.386262 4858 generic.go:334] "Generic (PLEG): container finished" podID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" containerID="b631c59911ea0daf17844c62c6bc8ec81b586fbb239a802b7dd36040959d1a09" exitCode=255
Jan 27 20:12:34 crc kubenswrapper[4858]: I0127 20:12:34.386299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" event={"ID":"18b4ebfb-ef47-4893-9f78-d6562b229c0c","Type":"ContainerDied","Data":"b631c59911ea0daf17844c62c6bc8ec81b586fbb239a802b7dd36040959d1a09"}
Jan 27 20:12:34 crc kubenswrapper[4858]: I0127 20:12:34.386353 4858 scope.go:117] "RemoveContainer" containerID="a08501913d15f383a1397a8e6048c9b3e1ea2c6284893d2eb638f3c32a763b2e"
Jan 27 20:12:34 crc kubenswrapper[4858]: I0127 20:12:34.387097 4858 scope.go:117] "RemoveContainer" containerID="b631c59911ea0daf17844c62c6bc8ec81b586fbb239a802b7dd36040959d1a09"
Jan 27 20:12:34 crc kubenswrapper[4858]: E0127 20:12:34.387471 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-fc667b7f-p64j4_openshift-authentication(18b4ebfb-ef47-4893-9f78-d6562b229c0c)\"" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c"
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-fc667b7f-p64j4_openshift-authentication(18b4ebfb-ef47-4893-9f78-d6562b229c0c)\"" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" Jan 27 20:12:35 crc kubenswrapper[4858]: I0127 20:12:35.394905 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-fc667b7f-p64j4_18b4ebfb-ef47-4893-9f78-d6562b229c0c/oauth-openshift/2.log" Jan 27 20:12:35 crc kubenswrapper[4858]: I0127 20:12:35.396321 4858 scope.go:117] "RemoveContainer" containerID="b631c59911ea0daf17844c62c6bc8ec81b586fbb239a802b7dd36040959d1a09" Jan 27 20:12:35 crc kubenswrapper[4858]: E0127 20:12:35.396668 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-fc667b7f-p64j4_openshift-authentication(18b4ebfb-ef47-4893-9f78-d6562b229c0c)\"" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" Jan 27 20:12:35 crc kubenswrapper[4858]: I0127 20:12:35.835256 4858 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 20:12:35 crc kubenswrapper[4858]: I0127 20:12:35.991050 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 20:12:36 crc kubenswrapper[4858]: I0127 20:12:36.099345 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0a88a361-e1fa-4902-b24f-c34183b69e8e" Jan 27 20:12:39 crc kubenswrapper[4858]: I0127 20:12:39.018647 4858 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 20:12:39 crc kubenswrapper[4858]: I0127 20:12:39.019071 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 20:12:40 crc kubenswrapper[4858]: I0127 20:12:40.116646 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 20:12:40 crc kubenswrapper[4858]: I0127 20:12:40.116684 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 20:12:40 crc kubenswrapper[4858]: I0127 20:12:40.808064 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 20:12:41 crc kubenswrapper[4858]: I0127 20:12:41.001259 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 20:12:41 crc kubenswrapper[4858]: I0127 20:12:41.062011 4858 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 20:12:41 crc kubenswrapper[4858]: I0127 20:12:41.770247 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 20:12:42 crc kubenswrapper[4858]: I0127 20:12:42.708733 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:42 crc kubenswrapper[4858]: I0127 20:12:42.709584 4858 scope.go:117] "RemoveContainer" containerID="b631c59911ea0daf17844c62c6bc8ec81b586fbb239a802b7dd36040959d1a09" Jan 27 20:12:42 crc kubenswrapper[4858]: E0127 20:12:42.709868 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-fc667b7f-p64j4_openshift-authentication(18b4ebfb-ef47-4893-9f78-d6562b229c0c)\"" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" Jan 27 20:12:42 crc kubenswrapper[4858]: I0127 20:12:42.765592 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 20:12:42 crc kubenswrapper[4858]: I0127 20:12:42.782767 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 20:12:42 crc kubenswrapper[4858]: I0127 20:12:42.927857 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 20:12:43 crc kubenswrapper[4858]: I0127 20:12:43.058194 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 20:12:43 crc kubenswrapper[4858]: I0127 20:12:43.071400 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 20:12:43 crc kubenswrapper[4858]: I0127 20:12:43.461958 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 20:12:43 crc kubenswrapper[4858]: I0127 20:12:43.546726 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 20:12:43 crc kubenswrapper[4858]: I0127 20:12:43.558194 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 20:12:43 crc kubenswrapper[4858]: I0127 20:12:43.582779 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 20:12:43 crc kubenswrapper[4858]: I0127 20:12:43.615199 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 20:12:43 crc kubenswrapper[4858]: I0127 20:12:43.639411 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 20:12:43 crc kubenswrapper[4858]: I0127 20:12:43.787415 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 20:12:43 crc kubenswrapper[4858]: I0127 20:12:43.986699 4858 reflector.go:368] 
Jan 27 20:12:44 crc kubenswrapper[4858]: I0127 20:12:44.046129 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 27 20:12:44 crc kubenswrapper[4858]: I0127 20:12:44.135783 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 27 20:12:44 crc kubenswrapper[4858]: I0127 20:12:44.250999 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 27 20:12:44 crc kubenswrapper[4858]: I0127 20:12:44.331200 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 27 20:12:44 crc kubenswrapper[4858]: I0127 20:12:44.507107 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 27 20:12:44 crc kubenswrapper[4858]: I0127 20:12:44.521416 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 27 20:12:44 crc kubenswrapper[4858]: I0127 20:12:44.560208 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 27 20:12:44 crc kubenswrapper[4858]: I0127 20:12:44.603444 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 27 20:12:44 crc kubenswrapper[4858]: I0127 20:12:44.640371 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 27 20:12:44 crc kubenswrapper[4858]: I0127 20:12:44.722228 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 27 20:12:44 crc kubenswrapper[4858]: I0127 20:12:44.753498 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 27 20:12:45 crc kubenswrapper[4858]: I0127 20:12:45.029419 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 27 20:12:45 crc kubenswrapper[4858]: I0127 20:12:45.164957 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 27 20:12:45 crc kubenswrapper[4858]: I0127 20:12:45.200508 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 27 20:12:45 crc kubenswrapper[4858]: I0127 20:12:45.339119 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 27 20:12:45 crc kubenswrapper[4858]: I0127 20:12:45.360079 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 27 20:12:45 crc kubenswrapper[4858]: I0127 20:12:45.477466 4858 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 27 20:12:45 crc kubenswrapper[4858]: I0127 20:12:45.493362 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 27 20:12:45 crc kubenswrapper[4858]: I0127 20:12:45.513828 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 27 20:12:45 crc kubenswrapper[4858]: I0127 20:12:45.528526 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 27 20:12:45 crc kubenswrapper[4858]: I0127 20:12:45.678705 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 27 20:12:45 crc kubenswrapper[4858]: I0127 20:12:45.734893 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 27 20:12:45 crc kubenswrapper[4858]: I0127 20:12:45.810697 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 27 20:12:45 crc kubenswrapper[4858]: I0127 20:12:45.827410 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 27 20:12:46 crc kubenswrapper[4858]: I0127 20:12:46.049467 4858 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 27 20:12:46 crc kubenswrapper[4858]: I0127 20:12:46.049811 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b9vrj" podStartSLOduration=48.007184391 podStartE2EDuration="2m19.049797177s" podCreationTimestamp="2026-01-27 20:10:27 +0000 UTC" firstStartedPulling="2026-01-27 20:10:31.773455044 +0000 UTC m=+176.481270750" lastFinishedPulling="2026-01-27 20:12:02.81606783 +0000 UTC m=+267.523883536" observedRunningTime="2026-01-27 20:12:31.172807763 +0000 UTC m=+295.880623479" watchObservedRunningTime="2026-01-27 20:12:46.049797177 +0000 UTC m=+310.757612883"
Jan 27 20:12:46 crc kubenswrapper[4858]: I0127 20:12:46.050866 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gnzjf" podStartSLOduration=34.932239801 podStartE2EDuration="2m16.050857428s" podCreationTimestamp="2026-01-27 20:10:30 +0000 UTC" firstStartedPulling="2026-01-27 20:10:32.803477322 +0000 UTC m=+177.511293028" lastFinishedPulling="2026-01-27 20:12:13.922094949 +0000 UTC m=+278.629910655" observedRunningTime="2026-01-27 20:12:31.264723761 +0000 UTC m=+295.972539487" watchObservedRunningTime="2026-01-27 20:12:46.050857428 +0000 UTC m=+310.758673134"
Jan 27 20:12:46 crc kubenswrapper[4858]: I0127 20:12:46.051487 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j4jtm" podStartSLOduration=32.030916574 podStartE2EDuration="2m19.051482506s" podCreationTimestamp="2026-01-27 20:10:27 +0000 UTC" firstStartedPulling="2026-01-27 20:10:29.534084531 +0000 UTC m=+174.241900237" lastFinishedPulling="2026-01-27 20:12:16.554650463 +0000 UTC m=+281.262466169" observedRunningTime="2026-01-27 20:12:31.189577987 +0000 UTC m=+295.897393713" watchObservedRunningTime="2026-01-27 20:12:46.051482506 +0000 UTC m=+310.759298212"
Jan 27 20:12:46 crc kubenswrapper[4858]: I0127 20:12:46.052120 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2r5qs" podStartSLOduration=47.147561221 podStartE2EDuration="2m18.052115395s" podCreationTimestamp="2026-01-27 20:10:28 +0000 UTC" firstStartedPulling="2026-01-27 20:10:30.674894831 +0000 UTC m=+175.382710527" lastFinishedPulling="2026-01-27 20:12:01.579448995 +0000 UTC m=+266.287264701" observedRunningTime="2026-01-27 20:12:31.248177433 +0000 UTC m=+295.955993149" watchObservedRunningTime="2026-01-27 20:12:46.052115395 +0000 UTC m=+310.759931091"
lastFinishedPulling="2026-01-27 20:12:01.579448995 +0000 UTC m=+266.287264701" observedRunningTime="2026-01-27 20:12:31.248177433 +0000 UTC m=+295.955993149" watchObservedRunningTime="2026-01-27 20:12:46.052115395 +0000 UTC m=+310.759931091" Jan 27 20:12:46 crc kubenswrapper[4858]: I0127 20:12:46.052480 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rl5k9" podStartSLOduration=30.256313411 podStartE2EDuration="2m18.052475435s" podCreationTimestamp="2026-01-27 20:10:28 +0000 UTC" firstStartedPulling="2026-01-27 20:10:30.702732323 +0000 UTC m=+175.410548029" lastFinishedPulling="2026-01-27 20:12:18.498894347 +0000 UTC m=+283.206710053" observedRunningTime="2026-01-27 20:12:31.113462554 +0000 UTC m=+295.821278260" watchObservedRunningTime="2026-01-27 20:12:46.052475435 +0000 UTC m=+310.760291141" Jan 27 20:12:46 crc kubenswrapper[4858]: I0127 20:12:46.053490 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hps49" podStartSLOduration=29.36513647 podStartE2EDuration="2m15.053486025s" podCreationTimestamp="2026-01-27 20:10:31 +0000 UTC" firstStartedPulling="2026-01-27 20:10:32.817406344 +0000 UTC m=+177.525222050" lastFinishedPulling="2026-01-27 20:12:18.505755899 +0000 UTC m=+283.213571605" observedRunningTime="2026-01-27 20:12:31.280718592 +0000 UTC m=+295.988534318" watchObservedRunningTime="2026-01-27 20:12:46.053486025 +0000 UTC m=+310.761301731" Jan 27 20:12:46 crc kubenswrapper[4858]: I0127 20:12:46.054202 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 20:12:46 crc kubenswrapper[4858]: I0127 20:12:46.054246 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 20:12:46 crc kubenswrapper[4858]: I0127 20:12:46.057950 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 20:12:46 crc kubenswrapper[4858]: I0127 20:12:46.071477 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.071462145 podStartE2EDuration="15.071462145s" podCreationTimestamp="2026-01-27 20:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:12:46.070601279 +0000 UTC m=+310.778417005" watchObservedRunningTime="2026-01-27 20:12:46.071462145 +0000 UTC m=+310.779277851" Jan 27 20:12:46 crc kubenswrapper[4858]: I0127 20:12:46.352247 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 20:12:46 crc kubenswrapper[4858]: I0127 20:12:46.755373 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 20:12:46 crc kubenswrapper[4858]: I0127 20:12:46.755674 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 20:12:46 crc kubenswrapper[4858]: I0127 20:12:46.782572 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 20:12:47 crc kubenswrapper[4858]: I0127 20:12:47.466404 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 20:12:47 crc kubenswrapper[4858]: 
I0127 20:12:47.485848 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 20:12:47 crc kubenswrapper[4858]: I0127 20:12:47.647447 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 20:12:47 crc kubenswrapper[4858]: I0127 20:12:47.648868 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 20:12:47 crc kubenswrapper[4858]: I0127 20:12:47.650131 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 20:12:47 crc kubenswrapper[4858]: I0127 20:12:47.681323 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 20:12:47 crc kubenswrapper[4858]: I0127 20:12:47.871841 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 20:12:47 crc kubenswrapper[4858]: I0127 20:12:47.912573 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 20:12:48 crc kubenswrapper[4858]: I0127 20:12:48.069168 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 20:12:48 crc kubenswrapper[4858]: I0127 20:12:48.084162 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 20:12:48 crc kubenswrapper[4858]: I0127 20:12:48.101486 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 20:12:48 crc kubenswrapper[4858]: I0127 20:12:48.220864 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 20:12:48 crc kubenswrapper[4858]: I0127 20:12:48.226048 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 20:12:48 crc kubenswrapper[4858]: I0127 20:12:48.361536 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 20:12:48 crc kubenswrapper[4858]: I0127 20:12:48.504537 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 20:12:48 crc kubenswrapper[4858]: I0127 20:12:48.580469 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 20:12:48 crc kubenswrapper[4858]: I0127 20:12:48.610844 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 20:12:48 crc kubenswrapper[4858]: I0127 20:12:48.618893 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 20:12:48 crc kubenswrapper[4858]: I0127 20:12:48.707720 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 20:12:48 crc kubenswrapper[4858]: I0127 20:12:48.794530 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 20:12:48 crc kubenswrapper[4858]: I0127 20:12:48.803775 
Jan 27 20:12:48 crc kubenswrapper[4858]: I0127 20:12:48.805510 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 27 20:12:48 crc kubenswrapper[4858]: I0127 20:12:48.887631 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 27 20:12:48 crc kubenswrapper[4858]: I0127 20:12:48.947348 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 27 20:12:49 crc kubenswrapper[4858]: I0127 20:12:49.018049 4858 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 27 20:12:49 crc kubenswrapper[4858]: I0127 20:12:49.018124 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 27 20:12:49 crc kubenswrapper[4858]: I0127 20:12:49.018175 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 27 20:12:49 crc kubenswrapper[4858]: I0127 20:12:49.018871 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"15f23e9dd88779c49a1c0a606155ff3f7b5cbc4d1c4b401ce54e9e653054bafd"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Jan 27 20:12:49 crc kubenswrapper[4858]: I0127 20:12:49.018978 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://15f23e9dd88779c49a1c0a606155ff3f7b5cbc4d1c4b401ce54e9e653054bafd" gracePeriod=30
Jan 27 20:12:49 crc kubenswrapper[4858]: I0127 20:12:49.350454 4858 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 27 20:12:49 crc kubenswrapper[4858]: I0127 20:12:49.387411 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 27 20:12:49 crc kubenswrapper[4858]: I0127 20:12:49.392526 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 27 20:12:49 crc kubenswrapper[4858]: I0127 20:12:49.495348 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 27 20:12:49 crc kubenswrapper[4858]: I0127 20:12:49.542881 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 27 20:12:49 crc kubenswrapper[4858]: I0127 20:12:49.802854 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 27 20:12:49 crc kubenswrapper[4858]: I0127 20:12:49.972454 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 27 20:12:49 crc kubenswrapper[4858]: I0127 20:12:49.980041 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 27 20:12:50 crc kubenswrapper[4858]: I0127 20:12:50.007499 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 27 20:12:50 crc kubenswrapper[4858]: I0127 20:12:50.026676 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 27 20:12:50 crc kubenswrapper[4858]: I0127 20:12:50.065819 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 27 20:12:50 crc kubenswrapper[4858]: I0127 20:12:50.104372 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 27 20:12:50 crc kubenswrapper[4858]: I0127 20:12:50.308029 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 27 20:12:50 crc kubenswrapper[4858]: I0127 20:12:50.594458 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 27 20:12:50 crc kubenswrapper[4858]: I0127 20:12:50.602808 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 27 20:12:50 crc kubenswrapper[4858]: I0127 20:12:50.769082 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 27 20:12:50 crc kubenswrapper[4858]: I0127 20:12:50.770997 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 27 20:12:50 crc kubenswrapper[4858]: I0127 20:12:50.905342 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 27 20:12:50 crc kubenswrapper[4858]: I0127 20:12:50.912203 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 27 20:12:51 crc kubenswrapper[4858]: I0127 20:12:51.079380 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 27 20:12:51 crc kubenswrapper[4858]: I0127 20:12:51.297062 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 20:12:51 crc kubenswrapper[4858]: I0127 20:12:51.304628 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 27 20:12:51 crc kubenswrapper[4858]: I0127 20:12:51.547973 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 27 20:12:51 crc kubenswrapper[4858]: I0127 20:12:51.959204 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 27 20:12:52 crc kubenswrapper[4858]: I0127 20:12:52.015425 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 27 20:12:52 crc kubenswrapper[4858]: I0127 20:12:52.016287 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 27 20:12:52 crc kubenswrapper[4858]: I0127 20:12:52.084292 4858 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 27 20:12:52 crc kubenswrapper[4858]: I0127 20:12:52.117017 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 27 20:12:52 crc kubenswrapper[4858]: I0127 20:12:52.192718 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 27 20:12:52 crc kubenswrapper[4858]: I0127 20:12:52.214301 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 27 20:12:52 crc kubenswrapper[4858]: I0127 20:12:52.285870 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 27 20:12:52 crc kubenswrapper[4858]: I0127 20:12:52.305603 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 27 20:12:52 crc kubenswrapper[4858]: I0127 20:12:52.357205 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 27 20:12:52 crc kubenswrapper[4858]: I0127 20:12:52.376274 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 27 20:12:52 crc kubenswrapper[4858]: I0127 20:12:52.620058 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 27 20:12:52 crc kubenswrapper[4858]: I0127 20:12:52.632248 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 27 20:12:52 crc kubenswrapper[4858]: I0127 20:12:52.659263 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 27 20:12:52 crc kubenswrapper[4858]: I0127 20:12:52.800721 4858 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 27 20:12:52 crc kubenswrapper[4858]: I0127 20:12:52.800977 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://82fa22465664468c89bd98ff7e3c12308c0ec1e10ff355833b7faa942e885aa2" gracePeriod=5
Jan 27 20:12:52 crc kubenswrapper[4858]: I0127 20:12:52.979102 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 27 20:12:53 crc kubenswrapper[4858]: I0127 20:12:53.056397 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 27 20:12:53 crc kubenswrapper[4858]: I0127 20:12:53.069664 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 27 20:12:53 crc kubenswrapper[4858]: I0127 20:12:53.141183 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 27 20:12:53 crc kubenswrapper[4858]: I0127 20:12:53.163864 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 27 20:12:53 crc kubenswrapper[4858]: I0127 20:12:53.399954 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 27 20:12:53 crc kubenswrapper[4858]: I0127 20:12:53.428263 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 27 20:12:53 crc kubenswrapper[4858]: I0127 20:12:53.446891 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 27 20:12:53 crc kubenswrapper[4858]: I0127 20:12:53.708443 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 27 20:12:53 crc kubenswrapper[4858]: I0127 20:12:53.839997 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 27 20:12:53 crc kubenswrapper[4858]: I0127 20:12:53.874451 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 27 20:12:54 crc kubenswrapper[4858]: I0127 20:12:54.152526 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 27 20:12:54 crc kubenswrapper[4858]: I0127 20:12:54.163680 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 27 20:12:54 crc kubenswrapper[4858]: I0127 20:12:54.168441 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 27 20:12:54 crc kubenswrapper[4858]: I0127 20:12:54.244348 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 27 20:12:54 crc kubenswrapper[4858]: I0127 20:12:54.359982 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 27 20:12:54 crc kubenswrapper[4858]: I0127 20:12:54.396329 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 27 20:12:54 crc kubenswrapper[4858]: I0127 20:12:54.406805 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 27 20:12:54 crc kubenswrapper[4858]: I0127 20:12:54.681358 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 27 20:12:54 crc kubenswrapper[4858]: I0127 20:12:54.748627 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 27 20:12:54 crc kubenswrapper[4858]: I0127 20:12:54.907836 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 27 20:12:54 crc kubenswrapper[4858]: I0127 20:12:54.919825 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 27 20:12:54 crc kubenswrapper[4858]: I0127 20:12:54.937286 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 20:12:54 crc kubenswrapper[4858]: I0127 20:12:54.982477 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 20:12:55 crc kubenswrapper[4858]: I0127 20:12:55.081655 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 20:12:55 crc kubenswrapper[4858]: I0127 20:12:55.235774 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 20:12:55 crc kubenswrapper[4858]: I0127 20:12:55.303302 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 20:12:55 crc kubenswrapper[4858]: I0127 20:12:55.629722 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 20:12:56 crc kubenswrapper[4858]: I0127 20:12:56.074360 4858 scope.go:117] "RemoveContainer" containerID="b631c59911ea0daf17844c62c6bc8ec81b586fbb239a802b7dd36040959d1a09" Jan 27 20:12:56 crc kubenswrapper[4858]: I0127 20:12:56.323583 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 20:12:56 crc kubenswrapper[4858]: I0127 20:12:56.524113 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-fc667b7f-p64j4_18b4ebfb-ef47-4893-9f78-d6562b229c0c/oauth-openshift/2.log" Jan 27 20:12:56 crc kubenswrapper[4858]: I0127 20:12:56.524168 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" event={"ID":"18b4ebfb-ef47-4893-9f78-d6562b229c0c","Type":"ContainerStarted","Data":"48f10328af54ec436b852c49d45ee2f38d43f56a2788ee9143324a9adaa1b508"} Jan 27 20:12:56 crc kubenswrapper[4858]: I0127 20:12:56.526496 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:12:56 crc kubenswrapper[4858]: I0127 20:12:56.553464 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" podStartSLOduration=81.55344325 podStartE2EDuration="1m21.55344325s" podCreationTimestamp="2026-01-27 20:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:12:33.398277781 +0000 UTC m=+298.106093497" watchObservedRunningTime="2026-01-27 20:12:56.55344325 +0000 UTC m=+321.261258966" Jan 27 20:12:56 crc kubenswrapper[4858]: I0127 20:12:56.691745 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 20:12:56 crc kubenswrapper[4858]: I0127 20:12:56.822622 4858 patch_prober.go:28] interesting pod/oauth-openshift-fc667b7f-p64j4 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": read tcp 10.217.0.2:49952->10.217.0.56:6443: read: connection reset by peer" start-of-body= Jan 27 20:12:56 crc kubenswrapper[4858]: I0127 20:12:56.822734 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" containerName="oauth-openshift" probeResult="failure" 
output="Get \"https://10.217.0.56:6443/healthz\": read tcp 10.217.0.2:49952->10.217.0.56:6443: read: connection reset by peer" Jan 27 20:12:57 crc kubenswrapper[4858]: I0127 20:12:57.537679 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-fc667b7f-p64j4_18b4ebfb-ef47-4893-9f78-d6562b229c0c/oauth-openshift/3.log" Jan 27 20:12:57 crc kubenswrapper[4858]: I0127 20:12:57.538648 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-fc667b7f-p64j4_18b4ebfb-ef47-4893-9f78-d6562b229c0c/oauth-openshift/2.log" Jan 27 20:12:57 crc kubenswrapper[4858]: I0127 20:12:57.538731 4858 generic.go:334] "Generic (PLEG): container finished" podID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" containerID="48f10328af54ec436b852c49d45ee2f38d43f56a2788ee9143324a9adaa1b508" exitCode=255 Jan 27 20:12:57 crc kubenswrapper[4858]: I0127 20:12:57.538782 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" event={"ID":"18b4ebfb-ef47-4893-9f78-d6562b229c0c","Type":"ContainerDied","Data":"48f10328af54ec436b852c49d45ee2f38d43f56a2788ee9143324a9adaa1b508"} Jan 27 20:12:57 crc kubenswrapper[4858]: I0127 20:12:57.538840 4858 scope.go:117] "RemoveContainer" containerID="b631c59911ea0daf17844c62c6bc8ec81b586fbb239a802b7dd36040959d1a09" Jan 27 20:12:57 crc kubenswrapper[4858]: I0127 20:12:57.539466 4858 scope.go:117] "RemoveContainer" containerID="48f10328af54ec436b852c49d45ee2f38d43f56a2788ee9143324a9adaa1b508" Jan 27 20:12:57 crc kubenswrapper[4858]: E0127 20:12:57.539781 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 40s restarting failed container=oauth-openshift pod=oauth-openshift-fc667b7f-p64j4_openshift-authentication(18b4ebfb-ef47-4893-9f78-d6562b229c0c)\"" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.377103 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.377213 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.546263 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.546796 4858 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="82fa22465664468c89bd98ff7e3c12308c0ec1e10ff355833b7faa942e885aa2" exitCode=137 Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.546894 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.547153 4858 scope.go:117] "RemoveContainer" containerID="82fa22465664468c89bd98ff7e3c12308c0ec1e10ff355833b7faa942e885aa2" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.555655 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-fc667b7f-p64j4_18b4ebfb-ef47-4893-9f78-d6562b229c0c/oauth-openshift/3.log" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.556848 4858 scope.go:117] "RemoveContainer" containerID="48f10328af54ec436b852c49d45ee2f38d43f56a2788ee9143324a9adaa1b508" Jan 27 20:12:58 crc kubenswrapper[4858]: E0127 20:12:58.557424 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 40s restarting failed container=oauth-openshift pod=oauth-openshift-fc667b7f-p64j4_openshift-authentication(18b4ebfb-ef47-4893-9f78-d6562b229c0c)\"" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.566286 4858 scope.go:117] "RemoveContainer" containerID="82fa22465664468c89bd98ff7e3c12308c0ec1e10ff355833b7faa942e885aa2" Jan 27 20:12:58 crc kubenswrapper[4858]: E0127 20:12:58.566932 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82fa22465664468c89bd98ff7e3c12308c0ec1e10ff355833b7faa942e885aa2\": container with ID starting with 82fa22465664468c89bd98ff7e3c12308c0ec1e10ff355833b7faa942e885aa2 not found: ID does not exist" containerID="82fa22465664468c89bd98ff7e3c12308c0ec1e10ff355833b7faa942e885aa2" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.566989 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82fa22465664468c89bd98ff7e3c12308c0ec1e10ff355833b7faa942e885aa2"} err="failed to get container status \"82fa22465664468c89bd98ff7e3c12308c0ec1e10ff355833b7faa942e885aa2\": rpc error: code = NotFound desc = could not find container \"82fa22465664468c89bd98ff7e3c12308c0ec1e10ff355833b7faa942e885aa2\": container with ID starting with 82fa22465664468c89bd98ff7e3c12308c0ec1e10ff355833b7faa942e885aa2 not found: ID does not exist" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.571716 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.571789 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.571958 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.571999 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.572015 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.572040 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.572068 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.572102 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.572175 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.572606 4858 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.572632 4858 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.572644 4858 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.572659 4858 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.581896 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:12:58 crc kubenswrapper[4858]: I0127 20:12:58.674150 4858 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 20:13:00 crc kubenswrapper[4858]: I0127 20:13:00.079160 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 20:13:02 crc kubenswrapper[4858]: I0127 20:13:02.707985 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" Jan 27 20:13:02 crc kubenswrapper[4858]: I0127 20:13:02.708861 4858 scope.go:117] "RemoveContainer" containerID="48f10328af54ec436b852c49d45ee2f38d43f56a2788ee9143324a9adaa1b508" Jan 27 20:13:02 crc kubenswrapper[4858]: E0127 20:13:02.709063 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 40s restarting failed container=oauth-openshift pod=oauth-openshift-fc667b7f-p64j4_openshift-authentication(18b4ebfb-ef47-4893-9f78-d6562b229c0c)\"" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" Jan 27 20:13:03 crc kubenswrapper[4858]: I0127 20:13:03.641039 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 20:13:03 crc kubenswrapper[4858]: I0127 20:13:03.661345 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 20:13:04 crc kubenswrapper[4858]: I0127 20:13:04.499890 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 20:13:04 crc kubenswrapper[4858]: I0127 20:13:04.523763 4858 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 20:13:05 crc kubenswrapper[4858]: I0127 20:13:05.793816 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 20:13:05 crc kubenswrapper[4858]: I0127 20:13:05.877940 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 20:13:06 crc kubenswrapper[4858]: I0127 20:13:06.522743 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 20:13:06 crc kubenswrapper[4858]: I0127 20:13:06.573985 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 20:13:06 crc kubenswrapper[4858]: I0127 20:13:06.794735 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 20:13:07 crc kubenswrapper[4858]: I0127 20:13:07.251409 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 20:13:07 crc kubenswrapper[4858]: I0127 20:13:07.875998 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 20:13:07 crc kubenswrapper[4858]: I0127 20:13:07.929721 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 20:13:08 crc kubenswrapper[4858]: I0127 20:13:08.632252 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 20:13:08 crc kubenswrapper[4858]: I0127 20:13:08.911852 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 20:13:08 crc kubenswrapper[4858]: I0127 20:13:08.989511 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 20:13:09 crc kubenswrapper[4858]: I0127 20:13:09.188031 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 20:13:09 crc kubenswrapper[4858]: I0127 20:13:09.195757 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 20:13:09 crc kubenswrapper[4858]: I0127 20:13:09.828693 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 20:13:09 crc kubenswrapper[4858]: I0127 20:13:09.900365 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 20:13:10 crc kubenswrapper[4858]: I0127 20:13:10.123369 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 20:13:10 crc kubenswrapper[4858]: I0127 20:13:10.460181 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 20:13:11 crc kubenswrapper[4858]: I0127 20:13:11.133770 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 20:13:11 crc kubenswrapper[4858]: I0127 20:13:11.283485 
4858 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 20:13:11 crc kubenswrapper[4858]: I0127 20:13:11.377164 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 20:13:11 crc kubenswrapper[4858]: I0127 20:13:11.527144 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 20:13:11 crc kubenswrapper[4858]: I0127 20:13:11.803853 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 20:13:11 crc kubenswrapper[4858]: I0127 20:13:11.860854 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 20:13:11 crc kubenswrapper[4858]: I0127 20:13:11.941361 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 20:13:11 crc kubenswrapper[4858]: I0127 20:13:11.943244 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 20:13:12 crc kubenswrapper[4858]: I0127 20:13:12.451493 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 20:13:13 crc kubenswrapper[4858]: I0127 20:13:13.054973 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 20:13:13 crc kubenswrapper[4858]: I0127 20:13:13.260506 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 20:13:13 crc kubenswrapper[4858]: I0127 20:13:13.343477 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 20:13:13 crc kubenswrapper[4858]: I0127 20:13:13.605720 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 20:13:13 crc kubenswrapper[4858]: I0127 20:13:13.663433 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 20:13:14 crc kubenswrapper[4858]: I0127 20:13:14.043360 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 20:13:14 crc kubenswrapper[4858]: I0127 20:13:14.623400 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 20:13:15 crc kubenswrapper[4858]: I0127 20:13:15.023375 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 20:13:15 crc kubenswrapper[4858]: I0127 20:13:15.092711 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 20:13:15 crc kubenswrapper[4858]: I0127 20:13:15.371997 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 20:13:15 crc kubenswrapper[4858]: I0127 20:13:15.492879 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 20:13:15 crc kubenswrapper[4858]: I0127 20:13:15.734320 4858 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 20:13:15 crc kubenswrapper[4858]: I0127 20:13:15.984297 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 20:13:16 crc kubenswrapper[4858]: I0127 20:13:16.074750 4858 scope.go:117] "RemoveContainer" containerID="48f10328af54ec436b852c49d45ee2f38d43f56a2788ee9143324a9adaa1b508" Jan 27 20:13:16 crc kubenswrapper[4858]: E0127 20:13:16.074998 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 40s restarting failed container=oauth-openshift pod=oauth-openshift-fc667b7f-p64j4_openshift-authentication(18b4ebfb-ef47-4893-9f78-d6562b229c0c)\"" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" Jan 27 20:13:16 crc kubenswrapper[4858]: I0127 20:13:16.118684 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 20:13:16 crc kubenswrapper[4858]: I0127 20:13:16.130920 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 20:13:16 crc kubenswrapper[4858]: I0127 20:13:16.204313 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 20:13:16 crc kubenswrapper[4858]: I0127 20:13:16.624916 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 20:13:16 crc kubenswrapper[4858]: I0127 20:13:16.832757 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 20:13:16 crc kubenswrapper[4858]: I0127 20:13:16.877473 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 20:13:17 crc kubenswrapper[4858]: I0127 20:13:17.209216 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 20:13:18 crc kubenswrapper[4858]: I0127 20:13:18.286584 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 20:13:18 crc kubenswrapper[4858]: I0127 20:13:18.405298 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 20:13:18 crc kubenswrapper[4858]: I0127 20:13:18.534294 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 20:13:18 crc kubenswrapper[4858]: I0127 20:13:18.764491 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 20:13:19 crc kubenswrapper[4858]: I0127 20:13:19.160438 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 20:13:19 crc kubenswrapper[4858]: I0127 20:13:19.298762 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 20:13:19 crc kubenswrapper[4858]: I0127 20:13:19.665537 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 27 20:13:19 crc kubenswrapper[4858]: I0127 20:13:19.667014 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 20:13:19 crc kubenswrapper[4858]: I0127 20:13:19.667059 4858 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="15f23e9dd88779c49a1c0a606155ff3f7b5cbc4d1c4b401ce54e9e653054bafd" exitCode=137 Jan 27 20:13:19 crc kubenswrapper[4858]: I0127 20:13:19.667095 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"15f23e9dd88779c49a1c0a606155ff3f7b5cbc4d1c4b401ce54e9e653054bafd"} Jan 27 20:13:19 crc kubenswrapper[4858]: I0127 20:13:19.667131 4858 scope.go:117] "RemoveContainer" containerID="e27578d067abba0ccd0c7459aca7d021f694440668d2ce3026354a9e8d5fd6a5" Jan 27 20:13:19 crc kubenswrapper[4858]: I0127 20:13:19.755845 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 20:13:19 crc kubenswrapper[4858]: I0127 20:13:19.793488 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 20:13:19 crc kubenswrapper[4858]: I0127 20:13:19.887512 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 20:13:20 crc kubenswrapper[4858]: I0127 20:13:20.051031 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 20:13:20 crc kubenswrapper[4858]: I0127 20:13:20.592130 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 20:13:20 crc kubenswrapper[4858]: I0127 20:13:20.674192 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 27 20:13:20 crc kubenswrapper[4858]: I0127 20:13:20.675278 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"06035551478e79da75ec21a7693eaf149e27a4d5068914364c7771050e7dbbef"} Jan 27 20:13:21 crc kubenswrapper[4858]: I0127 20:13:21.058067 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 20:13:21 crc kubenswrapper[4858]: I0127 20:13:21.059722 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 20:13:21 crc kubenswrapper[4858]: I0127 20:13:21.449058 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 20:13:21 crc kubenswrapper[4858]: I0127 20:13:21.480580 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 20:13:21 crc kubenswrapper[4858]: I0127 20:13:21.620245 4858 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 20:13:21 crc kubenswrapper[4858]: I0127 20:13:21.982343 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 20:13:22 crc kubenswrapper[4858]: I0127 20:13:22.290364 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 20:13:22 crc kubenswrapper[4858]: I0127 20:13:22.913608 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 20:13:23 crc kubenswrapper[4858]: I0127 20:13:23.676112 4858 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 20:13:23 crc kubenswrapper[4858]: I0127 20:13:23.982048 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 20:13:23 crc kubenswrapper[4858]: I0127 20:13:23.983445 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 20:13:24 crc kubenswrapper[4858]: I0127 20:13:24.367994 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 20:13:24 crc kubenswrapper[4858]: I0127 20:13:24.456286 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 20:13:24 crc kubenswrapper[4858]: I0127 20:13:24.700489 4858 generic.go:334] "Generic (PLEG): container finished" podID="4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d" containerID="efb59333516b70197bce8799ad8c5a0a47720e9ba044fff40ce02cf45e14988e" exitCode=0 Jan 27 20:13:24 crc kubenswrapper[4858]: I0127 20:13:24.700593 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" event={"ID":"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d","Type":"ContainerDied","Data":"efb59333516b70197bce8799ad8c5a0a47720e9ba044fff40ce02cf45e14988e"} Jan 27 20:13:24 crc kubenswrapper[4858]: I0127 20:13:24.701312 4858 scope.go:117] "RemoveContainer" containerID="efb59333516b70197bce8799ad8c5a0a47720e9ba044fff40ce02cf45e14988e" Jan 27 20:13:25 crc kubenswrapper[4858]: I0127 20:13:25.063703 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 20:13:25 crc kubenswrapper[4858]: I0127 20:13:25.156714 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 20:13:25 crc kubenswrapper[4858]: I0127 20:13:25.325288 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 20:13:25 crc kubenswrapper[4858]: I0127 20:13:25.411821 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 20:13:25 crc kubenswrapper[4858]: I0127 20:13:25.708598 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" event={"ID":"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d","Type":"ContainerStarted","Data":"8d0105146cfe7d4576dbcf760ad1195e4cabdd2a2738d03ea670f9e227012eda"} Jan 27 20:13:25 crc kubenswrapper[4858]: I0127 20:13:25.709437 4858 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" Jan 27 20:13:25 crc kubenswrapper[4858]: I0127 20:13:25.710691 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" Jan 27 20:13:25 crc kubenswrapper[4858]: I0127 20:13:25.953278 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 20:13:26 crc kubenswrapper[4858]: I0127 20:13:26.200523 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 20:13:26 crc kubenswrapper[4858]: I0127 20:13:26.547204 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 20:13:26 crc kubenswrapper[4858]: I0127 20:13:26.742611 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 20:13:26 crc kubenswrapper[4858]: I0127 20:13:26.790433 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 20:13:26 crc kubenswrapper[4858]: I0127 20:13:26.865167 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 20:13:26 crc kubenswrapper[4858]: I0127 20:13:26.945736 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 20:13:27 crc kubenswrapper[4858]: I0127 20:13:27.035110 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 20:13:27 crc kubenswrapper[4858]: I0127 20:13:27.166963 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 20:13:27 crc kubenswrapper[4858]: I0127 20:13:27.312242 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:13:27 crc kubenswrapper[4858]: I0127 20:13:27.914188 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 20:13:28 crc kubenswrapper[4858]: I0127 20:13:28.111101 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 20:13:28 crc kubenswrapper[4858]: I0127 20:13:28.179212 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 20:13:28 crc kubenswrapper[4858]: I0127 20:13:28.196042 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 20:13:28 crc kubenswrapper[4858]: I0127 20:13:28.231790 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 20:13:28 crc kubenswrapper[4858]: I0127 20:13:28.280435 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 20:13:28 crc kubenswrapper[4858]: I0127 20:13:28.344589 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 20:13:28 crc kubenswrapper[4858]: I0127 20:13:28.369390 4858 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"console-config" Jan 27 20:13:28 crc kubenswrapper[4858]: I0127 20:13:28.476143 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 20:13:28 crc kubenswrapper[4858]: I0127 20:13:28.692010 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 20:13:29 crc kubenswrapper[4858]: I0127 20:13:29.018086 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:13:29 crc kubenswrapper[4858]: I0127 20:13:29.025641 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:13:29 crc kubenswrapper[4858]: I0127 20:13:29.070300 4858 scope.go:117] "RemoveContainer" containerID="48f10328af54ec436b852c49d45ee2f38d43f56a2788ee9143324a9adaa1b508" Jan 27 20:13:29 crc kubenswrapper[4858]: E0127 20:13:29.070489 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 40s restarting failed container=oauth-openshift pod=oauth-openshift-fc667b7f-p64j4_openshift-authentication(18b4ebfb-ef47-4893-9f78-d6562b229c0c)\"" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" podUID="18b4ebfb-ef47-4893-9f78-d6562b229c0c" Jan 27 20:13:29 crc kubenswrapper[4858]: I0127 20:13:29.222488 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 20:13:29 crc kubenswrapper[4858]: I0127 20:13:29.306433 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 20:13:29 crc kubenswrapper[4858]: I0127 20:13:29.328652 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:13:29 crc kubenswrapper[4858]: I0127 20:13:29.328720 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:13:29 crc kubenswrapper[4858]: I0127 20:13:29.736203 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 20:13:29 crc kubenswrapper[4858]: I0127 20:13:29.947348 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 20:13:30 crc kubenswrapper[4858]: I0127 20:13:30.274776 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 20:13:31 crc kubenswrapper[4858]: I0127 20:13:31.978175 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 20:13:32 crc kubenswrapper[4858]: I0127 20:13:32.919485 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 
Jan 27 20:13:34 crc kubenswrapper[4858]: I0127 20:13:34.333597 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 27 20:13:34 crc kubenswrapper[4858]: I0127 20:13:34.477773 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 27 20:13:35 crc kubenswrapper[4858]: I0127 20:13:35.260065 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 27 20:13:35 crc kubenswrapper[4858]: I0127 20:13:35.373511 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 27 20:13:35 crc kubenswrapper[4858]: I0127 20:13:35.645316 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 27 20:13:36 crc kubenswrapper[4858]: I0127 20:13:36.573227 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 27 20:13:43 crc kubenswrapper[4858]: I0127 20:13:43.071449 4858 scope.go:117] "RemoveContainer" containerID="48f10328af54ec436b852c49d45ee2f38d43f56a2788ee9143324a9adaa1b508"
Jan 27 20:13:43 crc kubenswrapper[4858]: I0127 20:13:43.809136 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-fc667b7f-p64j4_18b4ebfb-ef47-4893-9f78-d6562b229c0c/oauth-openshift/3.log"
Jan 27 20:13:43 crc kubenswrapper[4858]: I0127 20:13:43.809387 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4" event={"ID":"18b4ebfb-ef47-4893-9f78-d6562b229c0c","Type":"ContainerStarted","Data":"e6da7b1e96b3194d48e56a7172ef532f1c6866bc168e732091d27f4fa5378b97"}
Jan 27 20:13:43 crc kubenswrapper[4858]: I0127 20:13:43.810013 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4"
Jan 27 20:13:43 crc kubenswrapper[4858]: I0127 20:13:43.817391 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-fc667b7f-p64j4"
Jan 27 20:13:51 crc kubenswrapper[4858]: I0127 20:13:51.802441 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dvbh6"]
Jan 27 20:13:51 crc kubenswrapper[4858]: I0127 20:13:51.803363 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" podUID="bb41d7df-dacd-41b0-8399-63ddcee318f6" containerName="controller-manager" containerID="cri-o://a06c97e538f734aa5d35c267ea97be53b5daa0eed12c6d6be8921831ae24b6f8" gracePeriod=30
Jan 27 20:13:51 crc kubenswrapper[4858]: I0127 20:13:51.935760 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8"]
Jan 27 20:13:51 crc kubenswrapper[4858]: I0127 20:13:51.935975 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" podUID="3f3f573f-78f3-46f9-8db7-c3df5ca093e9" containerName="route-controller-manager" containerID="cri-o://bd5e394ca6cce8b4570c896daaaf2d7f18794bb788d9c15aa4ad431053e3f585" gracePeriod=30
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.188604 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6"
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.286733 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8"
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.343069 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-client-ca\") pod \"bb41d7df-dacd-41b0-8399-63ddcee318f6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") "
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.343144 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-proxy-ca-bundles\") pod \"bb41d7df-dacd-41b0-8399-63ddcee318f6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") "
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.343210 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8dkm\" (UniqueName: \"kubernetes.io/projected/bb41d7df-dacd-41b0-8399-63ddcee318f6-kube-api-access-v8dkm\") pod \"bb41d7df-dacd-41b0-8399-63ddcee318f6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") "
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.343250 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-config\") pod \"bb41d7df-dacd-41b0-8399-63ddcee318f6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") "
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.343288 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb41d7df-dacd-41b0-8399-63ddcee318f6-serving-cert\") pod \"bb41d7df-dacd-41b0-8399-63ddcee318f6\" (UID: \"bb41d7df-dacd-41b0-8399-63ddcee318f6\") "
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.343331 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsb98\" (UniqueName: \"kubernetes.io/projected/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-kube-api-access-bsb98\") pod \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\" (UID: \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\") "
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.343401 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-config\") pod \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\" (UID: \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\") "
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.344676 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bb41d7df-dacd-41b0-8399-63ddcee318f6" (UID: "bb41d7df-dacd-41b0-8399-63ddcee318f6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.344673 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-client-ca" (OuterVolumeSpecName: "client-ca") pod "bb41d7df-dacd-41b0-8399-63ddcee318f6" (UID: "bb41d7df-dacd-41b0-8399-63ddcee318f6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.344727 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-config" (OuterVolumeSpecName: "config") pod "3f3f573f-78f3-46f9-8db7-c3df5ca093e9" (UID: "3f3f573f-78f3-46f9-8db7-c3df5ca093e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.344840 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-config" (OuterVolumeSpecName: "config") pod "bb41d7df-dacd-41b0-8399-63ddcee318f6" (UID: "bb41d7df-dacd-41b0-8399-63ddcee318f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.349179 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb41d7df-dacd-41b0-8399-63ddcee318f6-kube-api-access-v8dkm" (OuterVolumeSpecName: "kube-api-access-v8dkm") pod "bb41d7df-dacd-41b0-8399-63ddcee318f6" (UID: "bb41d7df-dacd-41b0-8399-63ddcee318f6"). InnerVolumeSpecName "kube-api-access-v8dkm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.349432 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb41d7df-dacd-41b0-8399-63ddcee318f6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bb41d7df-dacd-41b0-8399-63ddcee318f6" (UID: "bb41d7df-dacd-41b0-8399-63ddcee318f6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.350233 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-kube-api-access-bsb98" (OuterVolumeSpecName: "kube-api-access-bsb98") pod "3f3f573f-78f3-46f9-8db7-c3df5ca093e9" (UID: "3f3f573f-78f3-46f9-8db7-c3df5ca093e9"). InnerVolumeSpecName "kube-api-access-bsb98". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.447137 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-serving-cert\") pod \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\" (UID: \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\") "
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.447229 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-client-ca\") pod \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\" (UID: \"3f3f573f-78f3-46f9-8db7-c3df5ca093e9\") "
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.447605 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8dkm\" (UniqueName: \"kubernetes.io/projected/bb41d7df-dacd-41b0-8399-63ddcee318f6-kube-api-access-v8dkm\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.447627 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.447642 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb41d7df-dacd-41b0-8399-63ddcee318f6-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.447656 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsb98\" (UniqueName: \"kubernetes.io/projected/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-kube-api-access-bsb98\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.447667 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-config\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.447678 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.447689 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bb41d7df-dacd-41b0-8399-63ddcee318f6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.448195 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-client-ca" (OuterVolumeSpecName: "client-ca") pod "3f3f573f-78f3-46f9-8db7-c3df5ca093e9" (UID: "3f3f573f-78f3-46f9-8db7-c3df5ca093e9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.461165 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3f3f573f-78f3-46f9-8db7-c3df5ca093e9" (UID: "3f3f573f-78f3-46f9-8db7-c3df5ca093e9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.548748 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.548788 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f3f573f-78f3-46f9-8db7-c3df5ca093e9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.868042 4858 generic.go:334] "Generic (PLEG): container finished" podID="3f3f573f-78f3-46f9-8db7-c3df5ca093e9" containerID="bd5e394ca6cce8b4570c896daaaf2d7f18794bb788d9c15aa4ad431053e3f585" exitCode=0
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.868139 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8"
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.868136 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" event={"ID":"3f3f573f-78f3-46f9-8db7-c3df5ca093e9","Type":"ContainerDied","Data":"bd5e394ca6cce8b4570c896daaaf2d7f18794bb788d9c15aa4ad431053e3f585"}
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.868250 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8" event={"ID":"3f3f573f-78f3-46f9-8db7-c3df5ca093e9","Type":"ContainerDied","Data":"10539c960846aec2d551b8cb991f4a84182ea0615bb88728806bc4c11b668f52"}
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.868283 4858 scope.go:117] "RemoveContainer" containerID="bd5e394ca6cce8b4570c896daaaf2d7f18794bb788d9c15aa4ad431053e3f585"
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.872066 4858 generic.go:334] "Generic (PLEG): container finished" podID="bb41d7df-dacd-41b0-8399-63ddcee318f6" containerID="a06c97e538f734aa5d35c267ea97be53b5daa0eed12c6d6be8921831ae24b6f8" exitCode=0
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.872096 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6"
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.872119 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" event={"ID":"bb41d7df-dacd-41b0-8399-63ddcee318f6","Type":"ContainerDied","Data":"a06c97e538f734aa5d35c267ea97be53b5daa0eed12c6d6be8921831ae24b6f8"}
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.872160 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dvbh6" event={"ID":"bb41d7df-dacd-41b0-8399-63ddcee318f6","Type":"ContainerDied","Data":"3413fabb5b3c230ef3f484c78fc856c1b1f54993d12809da50610da5c87ba5a5"}
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.889687 4858 scope.go:117] "RemoveContainer" containerID="bd5e394ca6cce8b4570c896daaaf2d7f18794bb788d9c15aa4ad431053e3f585"
Jan 27 20:13:52 crc kubenswrapper[4858]: E0127 20:13:52.890564 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd5e394ca6cce8b4570c896daaaf2d7f18794bb788d9c15aa4ad431053e3f585\": container with ID starting with bd5e394ca6cce8b4570c896daaaf2d7f18794bb788d9c15aa4ad431053e3f585 not found: ID does not exist" containerID="bd5e394ca6cce8b4570c896daaaf2d7f18794bb788d9c15aa4ad431053e3f585"
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.890611 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd5e394ca6cce8b4570c896daaaf2d7f18794bb788d9c15aa4ad431053e3f585"} err="failed to get container status \"bd5e394ca6cce8b4570c896daaaf2d7f18794bb788d9c15aa4ad431053e3f585\": rpc error: code = NotFound desc = could not find container \"bd5e394ca6cce8b4570c896daaaf2d7f18794bb788d9c15aa4ad431053e3f585\": container with ID starting with bd5e394ca6cce8b4570c896daaaf2d7f18794bb788d9c15aa4ad431053e3f585 not found: ID does not exist"
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.890666 4858 scope.go:117] "RemoveContainer" containerID="a06c97e538f734aa5d35c267ea97be53b5daa0eed12c6d6be8921831ae24b6f8"
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.924238 4858 scope.go:117] "RemoveContainer" containerID="a06c97e538f734aa5d35c267ea97be53b5daa0eed12c6d6be8921831ae24b6f8"
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.925801 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dvbh6"]
Jan 27 20:13:52 crc kubenswrapper[4858]: E0127 20:13:52.927807 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a06c97e538f734aa5d35c267ea97be53b5daa0eed12c6d6be8921831ae24b6f8\": container with ID starting with a06c97e538f734aa5d35c267ea97be53b5daa0eed12c6d6be8921831ae24b6f8 not found: ID does not exist" containerID="a06c97e538f734aa5d35c267ea97be53b5daa0eed12c6d6be8921831ae24b6f8"
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.928342 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a06c97e538f734aa5d35c267ea97be53b5daa0eed12c6d6be8921831ae24b6f8"} err="failed to get container status \"a06c97e538f734aa5d35c267ea97be53b5daa0eed12c6d6be8921831ae24b6f8\": rpc error: code = NotFound desc = could not find container \"a06c97e538f734aa5d35c267ea97be53b5daa0eed12c6d6be8921831ae24b6f8\": container with ID starting with a06c97e538f734aa5d35c267ea97be53b5daa0eed12c6d6be8921831ae24b6f8 not found: ID does not exist"
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.932109 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dvbh6"]
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.937874 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8"]
Jan 27 20:13:52 crc kubenswrapper[4858]: I0127 20:13:52.942443 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5rpw8"]
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.086637 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl"]
Jan 27 20:13:53 crc kubenswrapper[4858]: E0127 20:13:53.086928 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f3f573f-78f3-46f9-8db7-c3df5ca093e9" containerName="route-controller-manager"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.086940 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f3f573f-78f3-46f9-8db7-c3df5ca093e9" containerName="route-controller-manager"
Jan 27 20:13:53 crc kubenswrapper[4858]: E0127 20:13:53.086951 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb41d7df-dacd-41b0-8399-63ddcee318f6" containerName="controller-manager"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.086961 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb41d7df-dacd-41b0-8399-63ddcee318f6" containerName="controller-manager"
Jan 27 20:13:53 crc kubenswrapper[4858]: E0127 20:13:53.086981 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.086988 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 27 20:13:53 crc kubenswrapper[4858]: E0127 20:13:53.086999 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" containerName="installer"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.087006 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" containerName="installer"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.087136 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb41d7df-dacd-41b0-8399-63ddcee318f6" containerName="controller-manager"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.087149 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.087162 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e50de7cf-1829-431e-8655-0e948b1695f7" containerName="installer"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.087176 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f3f573f-78f3-46f9-8db7-c3df5ca093e9" containerName="route-controller-manager"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.087629 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.092699 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.092982 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.093054 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.093429 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.094456 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.095906 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.098404 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f4845cc87-n8rnz"]
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.099774 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.103048 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.103320 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.104697 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.104856 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.105699 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.107060 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.113101 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.115247 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl"]
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.119095 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f4845cc87-n8rnz"]
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.258582 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e179bef-d974-45fa-abf8-dfe3901ae243-client-ca\") pod \"route-controller-manager-645d5c8f55-4tzcl\" (UID: \"3e179bef-d974-45fa-abf8-dfe3901ae243\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.258670 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b766be6-369c-4ea7-889f-67b2d3d1c205-proxy-ca-bundles\") pod \"controller-manager-5f4845cc87-n8rnz\" (UID: \"4b766be6-369c-4ea7-889f-67b2d3d1c205\") " pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.258738 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b766be6-369c-4ea7-889f-67b2d3d1c205-config\") pod \"controller-manager-5f4845cc87-n8rnz\" (UID: \"4b766be6-369c-4ea7-889f-67b2d3d1c205\") " pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.258770 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e179bef-d974-45fa-abf8-dfe3901ae243-config\") pod \"route-controller-manager-645d5c8f55-4tzcl\" (UID: \"3e179bef-d974-45fa-abf8-dfe3901ae243\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.258804 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wmg8\" (UniqueName: \"kubernetes.io/projected/3e179bef-d974-45fa-abf8-dfe3901ae243-kube-api-access-9wmg8\") pod \"route-controller-manager-645d5c8f55-4tzcl\" (UID: \"3e179bef-d974-45fa-abf8-dfe3901ae243\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.258837 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks728\" (UniqueName: \"kubernetes.io/projected/4b766be6-369c-4ea7-889f-67b2d3d1c205-kube-api-access-ks728\") pod \"controller-manager-5f4845cc87-n8rnz\" (UID: \"4b766be6-369c-4ea7-889f-67b2d3d1c205\") " pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.258862 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e179bef-d974-45fa-abf8-dfe3901ae243-serving-cert\") pod \"route-controller-manager-645d5c8f55-4tzcl\" (UID: \"3e179bef-d974-45fa-abf8-dfe3901ae243\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl"
Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.258905 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b766be6-369c-4ea7-889f-67b2d3d1c205-serving-cert\") pod \"controller-manager-5f4845cc87-n8rnz\" (UID: \"4b766be6-369c-4ea7-889f-67b2d3d1c205\") " pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz"
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b766be6-369c-4ea7-889f-67b2d3d1c205-client-ca\") pod \"controller-manager-5f4845cc87-n8rnz\" (UID: \"4b766be6-369c-4ea7-889f-67b2d3d1c205\") " pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.359857 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b766be6-369c-4ea7-889f-67b2d3d1c205-proxy-ca-bundles\") pod \"controller-manager-5f4845cc87-n8rnz\" (UID: \"4b766be6-369c-4ea7-889f-67b2d3d1c205\") " pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.359946 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b766be6-369c-4ea7-889f-67b2d3d1c205-config\") pod \"controller-manager-5f4845cc87-n8rnz\" (UID: \"4b766be6-369c-4ea7-889f-67b2d3d1c205\") " pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.359979 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e179bef-d974-45fa-abf8-dfe3901ae243-config\") pod \"route-controller-manager-645d5c8f55-4tzcl\" (UID: \"3e179bef-d974-45fa-abf8-dfe3901ae243\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.360028 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wmg8\" (UniqueName: \"kubernetes.io/projected/3e179bef-d974-45fa-abf8-dfe3901ae243-kube-api-access-9wmg8\") pod \"route-controller-manager-645d5c8f55-4tzcl\" (UID: \"3e179bef-d974-45fa-abf8-dfe3901ae243\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.360069 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks728\" (UniqueName: \"kubernetes.io/projected/4b766be6-369c-4ea7-889f-67b2d3d1c205-kube-api-access-ks728\") pod \"controller-manager-5f4845cc87-n8rnz\" (UID: \"4b766be6-369c-4ea7-889f-67b2d3d1c205\") " pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.360098 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e179bef-d974-45fa-abf8-dfe3901ae243-serving-cert\") pod \"route-controller-manager-645d5c8f55-4tzcl\" (UID: \"3e179bef-d974-45fa-abf8-dfe3901ae243\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.360132 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b766be6-369c-4ea7-889f-67b2d3d1c205-serving-cert\") pod \"controller-manager-5f4845cc87-n8rnz\" (UID: \"4b766be6-369c-4ea7-889f-67b2d3d1c205\") " pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.360154 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b766be6-369c-4ea7-889f-67b2d3d1c205-client-ca\") pod \"controller-manager-5f4845cc87-n8rnz\" 
(UID: \"4b766be6-369c-4ea7-889f-67b2d3d1c205\") " pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.360185 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e179bef-d974-45fa-abf8-dfe3901ae243-client-ca\") pod \"route-controller-manager-645d5c8f55-4tzcl\" (UID: \"3e179bef-d974-45fa-abf8-dfe3901ae243\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.364179 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b766be6-369c-4ea7-889f-67b2d3d1c205-config\") pod \"controller-manager-5f4845cc87-n8rnz\" (UID: \"4b766be6-369c-4ea7-889f-67b2d3d1c205\") " pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.364952 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b766be6-369c-4ea7-889f-67b2d3d1c205-client-ca\") pod \"controller-manager-5f4845cc87-n8rnz\" (UID: \"4b766be6-369c-4ea7-889f-67b2d3d1c205\") " pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.365363 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e179bef-d974-45fa-abf8-dfe3901ae243-config\") pod \"route-controller-manager-645d5c8f55-4tzcl\" (UID: \"3e179bef-d974-45fa-abf8-dfe3901ae243\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.365423 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4b766be6-369c-4ea7-889f-67b2d3d1c205-proxy-ca-bundles\") pod \"controller-manager-5f4845cc87-n8rnz\" (UID: \"4b766be6-369c-4ea7-889f-67b2d3d1c205\") " pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.365992 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e179bef-d974-45fa-abf8-dfe3901ae243-client-ca\") pod \"route-controller-manager-645d5c8f55-4tzcl\" (UID: \"3e179bef-d974-45fa-abf8-dfe3901ae243\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.368248 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b766be6-369c-4ea7-889f-67b2d3d1c205-serving-cert\") pod \"controller-manager-5f4845cc87-n8rnz\" (UID: \"4b766be6-369c-4ea7-889f-67b2d3d1c205\") " pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.369644 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e179bef-d974-45fa-abf8-dfe3901ae243-serving-cert\") pod \"route-controller-manager-645d5c8f55-4tzcl\" (UID: \"3e179bef-d974-45fa-abf8-dfe3901ae243\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.376250 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ks728\" (UniqueName: \"kubernetes.io/projected/4b766be6-369c-4ea7-889f-67b2d3d1c205-kube-api-access-ks728\") pod \"controller-manager-5f4845cc87-n8rnz\" (UID: \"4b766be6-369c-4ea7-889f-67b2d3d1c205\") " pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.379019 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wmg8\" (UniqueName: \"kubernetes.io/projected/3e179bef-d974-45fa-abf8-dfe3901ae243-kube-api-access-9wmg8\") pod \"route-controller-manager-645d5c8f55-4tzcl\" (UID: \"3e179bef-d974-45fa-abf8-dfe3901ae243\") " pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.413164 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.422499 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.585515 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl"] Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.621369 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f4845cc87-n8rnz"] Jan 27 20:13:53 crc kubenswrapper[4858]: W0127 20:13:53.630815 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b766be6_369c_4ea7_889f_67b2d3d1c205.slice/crio-62b2a64f9e23e4d8f7b2e51abafc3ed6dfb8b1c9cd3604350e528d6062edaeed WatchSource:0}: Error finding container 62b2a64f9e23e4d8f7b2e51abafc3ed6dfb8b1c9cd3604350e528d6062edaeed: Status 404 returned error can't find the container with id 62b2a64f9e23e4d8f7b2e51abafc3ed6dfb8b1c9cd3604350e528d6062edaeed Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.878769 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" event={"ID":"3e179bef-d974-45fa-abf8-dfe3901ae243","Type":"ContainerStarted","Data":"ba38c15d32972539661ebd6e098e326ab434ef437ed6f1301d8f784c9afbcbb2"} Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.878808 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" event={"ID":"3e179bef-d974-45fa-abf8-dfe3901ae243","Type":"ContainerStarted","Data":"506d1688b9e34ce6cf0c16fef6bed6cf5247d265e55c9783eed334fb09dd5846"} Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.879734 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.882037 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz" event={"ID":"4b766be6-369c-4ea7-889f-67b2d3d1c205","Type":"ContainerStarted","Data":"9ab1cf9b29c268a468114c4a1b897409ecbbba6e045eb29a2c549f67432b988c"} Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.882088 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz" event={"ID":"4b766be6-369c-4ea7-889f-67b2d3d1c205","Type":"ContainerStarted","Data":"62b2a64f9e23e4d8f7b2e51abafc3ed6dfb8b1c9cd3604350e528d6062edaeed"} Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.882255 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.886171 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.897393 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" podStartSLOduration=1.89737935 podStartE2EDuration="1.89737935s" podCreationTimestamp="2026-01-27 20:13:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:13:53.894027527 +0000 UTC m=+378.601843233" watchObservedRunningTime="2026-01-27 20:13:53.89737935 +0000 UTC m=+378.605195046" Jan 27 20:13:53 crc kubenswrapper[4858]: I0127 20:13:53.918252 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f4845cc87-n8rnz" podStartSLOduration=2.918229206 podStartE2EDuration="2.918229206s" podCreationTimestamp="2026-01-27 20:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:13:53.915058269 +0000 UTC m=+378.622873985" watchObservedRunningTime="2026-01-27 20:13:53.918229206 +0000 UTC m=+378.626044932" Jan 27 20:13:54 crc kubenswrapper[4858]: I0127 20:13:54.080584 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f3f573f-78f3-46f9-8db7-c3df5ca093e9" path="/var/lib/kubelet/pods/3f3f573f-78f3-46f9-8db7-c3df5ca093e9/volumes" Jan 27 20:13:54 crc kubenswrapper[4858]: I0127 20:13:54.081454 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb41d7df-dacd-41b0-8399-63ddcee318f6" path="/var/lib/kubelet/pods/bb41d7df-dacd-41b0-8399-63ddcee318f6/volumes" Jan 27 20:13:54 crc kubenswrapper[4858]: I0127 20:13:54.090484 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" Jan 27 20:13:59 crc kubenswrapper[4858]: I0127 20:13:59.330078 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:13:59 crc kubenswrapper[4858]: I0127 20:13:59.330778 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:14:11 crc kubenswrapper[4858]: I0127 20:14:11.020617 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:14:11 crc kubenswrapper[4858]: I0127 20:14:11.021973 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:14:11 crc kubenswrapper[4858]: I0127 20:14:11.022798 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:14:11 crc kubenswrapper[4858]: I0127 20:14:11.030353 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:14:11 crc kubenswrapper[4858]: I0127 20:14:11.172037 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 20:14:11 crc kubenswrapper[4858]: W0127 20:14:11.648861 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-1625487880b9bf6f24a91bfd3563694e23e6999eab4466d203da65d3430e3f47 WatchSource:0}: Error finding container 1625487880b9bf6f24a91bfd3563694e23e6999eab4466d203da65d3430e3f47: Status 404 returned error can't find the container with id 1625487880b9bf6f24a91bfd3563694e23e6999eab4466d203da65d3430e3f47 Jan 27 20:14:11 crc kubenswrapper[4858]: I0127 20:14:11.986906 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4b21a3656c6f7e217ea1c2bb2ccd377f1735275573404f728e294161602d1768"} Jan 27 20:14:11 crc kubenswrapper[4858]: I0127 20:14:11.986992 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1625487880b9bf6f24a91bfd3563694e23e6999eab4466d203da65d3430e3f47"} Jan 27 20:14:12 crc kubenswrapper[4858]: I0127 20:14:12.144697 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:14:12 crc kubenswrapper[4858]: I0127 20:14:12.144784 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:14:12 crc kubenswrapper[4858]: I0127 20:14:12.150417 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:14:12 crc kubenswrapper[4858]: I0127 20:14:12.154378 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:14:12 crc kubenswrapper[4858]: I0127 20:14:12.272172 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:14:12 crc kubenswrapper[4858]: I0127 20:14:12.272198 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 20:14:12 crc kubenswrapper[4858]: W0127 20:14:12.727046 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-1cb42a90950d1a99de922ed8fe8d32a163a7cf75ab5be81b9b649c5bf197dae8 WatchSource:0}: Error finding container 1cb42a90950d1a99de922ed8fe8d32a163a7cf75ab5be81b9b649c5bf197dae8: Status 404 returned error can't find the container with id 1cb42a90950d1a99de922ed8fe8d32a163a7cf75ab5be81b9b649c5bf197dae8 Jan 27 20:14:12 crc kubenswrapper[4858]: I0127 20:14:12.995317 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"939e8963c8fa9294bebd3bfe768d2f03b28779138f883dab10dc828753a31401"} Jan 27 20:14:12 crc kubenswrapper[4858]: I0127 20:14:12.995661 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1cb42a90950d1a99de922ed8fe8d32a163a7cf75ab5be81b9b649c5bf197dae8"} Jan 27 20:14:12 crc kubenswrapper[4858]: I0127 20:14:12.995832 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:14:12 crc kubenswrapper[4858]: I0127 20:14:12.997465 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9db9bb1b060f4528ef75f2508ce872caff769b60ddf35e3ac9aa96f1e40fd648"} Jan 27 20:14:12 crc kubenswrapper[4858]: I0127 20:14:12.997502 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8e6340262854ab6aa172e00ae13b2bcb86af4278265ec579c123cf1c03578b14"} Jan 27 20:14:29 crc kubenswrapper[4858]: I0127 20:14:29.329447 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:14:29 crc kubenswrapper[4858]: I0127 20:14:29.329952 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:14:29 crc kubenswrapper[4858]: I0127 20:14:29.330001 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:14:29 crc kubenswrapper[4858]: I0127 20:14:29.330632 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f523d2a034fb7aa3deeabfd7fe2846140bad94ae6e8919a72e4a06a8629bcf50"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 20:14:29 crc kubenswrapper[4858]: I0127 20:14:29.330713 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://f523d2a034fb7aa3deeabfd7fe2846140bad94ae6e8919a72e4a06a8629bcf50" gracePeriod=600 Jan 27 20:14:30 crc kubenswrapper[4858]: I0127 20:14:30.099139 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="f523d2a034fb7aa3deeabfd7fe2846140bad94ae6e8919a72e4a06a8629bcf50" exitCode=0 Jan 27 20:14:30 crc kubenswrapper[4858]: I0127 20:14:30.099635 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"f523d2a034fb7aa3deeabfd7fe2846140bad94ae6e8919a72e4a06a8629bcf50"} Jan 27 20:14:30 crc kubenswrapper[4858]: I0127 20:14:30.099666 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"97a523bb07aac1f1bb0ad85b2296648a7841f219815ab2eac986bfc2fc387de8"} Jan 27 20:14:30 crc kubenswrapper[4858]: I0127 20:14:30.099686 4858 scope.go:117] "RemoveContainer" containerID="e5bec75f341e43328598c3d7b3d1726b948af90cf11d870fd38e0de5263b7689" Jan 27 20:14:30 crc kubenswrapper[4858]: I0127 20:14:30.826207 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r69km"] Jan 27 20:14:30 crc kubenswrapper[4858]: I0127 20:14:30.826518 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r69km" podUID="d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a" containerName="registry-server" 
containerID="cri-o://166e9bcf2bd3b7203815da4c9ae4a319eb0c83f84d51fb9b0c42dfd306c09418" gracePeriod=2 Jan 27 20:14:31 crc kubenswrapper[4858]: I0127 20:14:31.112832 4858 generic.go:334] "Generic (PLEG): container finished" podID="d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a" containerID="166e9bcf2bd3b7203815da4c9ae4a319eb0c83f84d51fb9b0c42dfd306c09418" exitCode=0 Jan 27 20:14:31 crc kubenswrapper[4858]: I0127 20:14:31.112885 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r69km" event={"ID":"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a","Type":"ContainerDied","Data":"166e9bcf2bd3b7203815da4c9ae4a319eb0c83f84d51fb9b0c42dfd306c09418"} Jan 27 20:14:31 crc kubenswrapper[4858]: I0127 20:14:31.389385 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r69km" Jan 27 20:14:31 crc kubenswrapper[4858]: I0127 20:14:31.396848 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrlh6\" (UniqueName: \"kubernetes.io/projected/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-kube-api-access-jrlh6\") pod \"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a\" (UID: \"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a\") " Jan 27 20:14:31 crc kubenswrapper[4858]: I0127 20:14:31.396918 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-utilities\") pod \"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a\" (UID: \"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a\") " Jan 27 20:14:31 crc kubenswrapper[4858]: I0127 20:14:31.396946 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-catalog-content\") pod \"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a\" (UID: \"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a\") " Jan 27 20:14:31 crc kubenswrapper[4858]: I0127 20:14:31.398347 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-utilities" (OuterVolumeSpecName: "utilities") pod "d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a" (UID: "d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:14:31 crc kubenswrapper[4858]: I0127 20:14:31.405512 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-kube-api-access-jrlh6" (OuterVolumeSpecName: "kube-api-access-jrlh6") pod "d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a" (UID: "d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a"). InnerVolumeSpecName "kube-api-access-jrlh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:14:31 crc kubenswrapper[4858]: I0127 20:14:31.427717 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a" (UID: "d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:14:31 crc kubenswrapper[4858]: I0127 20:14:31.497822 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrlh6\" (UniqueName: \"kubernetes.io/projected/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-kube-api-access-jrlh6\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:31 crc kubenswrapper[4858]: I0127 20:14:31.497865 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:31 crc kubenswrapper[4858]: I0127 20:14:31.497879 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:32 crc kubenswrapper[4858]: I0127 20:14:32.123771 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r69km" event={"ID":"d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a","Type":"ContainerDied","Data":"9701b934e9478a2313bd1aca320bef1a7eec9ba5a784ca52350fc8d885a48237"} Jan 27 20:14:32 crc kubenswrapper[4858]: I0127 20:14:32.124085 4858 scope.go:117] "RemoveContainer" containerID="166e9bcf2bd3b7203815da4c9ae4a319eb0c83f84d51fb9b0c42dfd306c09418" Jan 27 20:14:32 crc kubenswrapper[4858]: I0127 20:14:32.124008 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r69km" Jan 27 20:14:32 crc kubenswrapper[4858]: I0127 20:14:32.156842 4858 scope.go:117] "RemoveContainer" containerID="239ea1bc70093e8a144d1060556591b2b43840db112b7c5f483786bf05e11380" Jan 27 20:14:32 crc kubenswrapper[4858]: I0127 20:14:32.160096 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r69km"] Jan 27 20:14:32 crc kubenswrapper[4858]: I0127 20:14:32.164264 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r69km"] Jan 27 20:14:32 crc kubenswrapper[4858]: I0127 20:14:32.171436 4858 scope.go:117] "RemoveContainer" containerID="f40ccdd5765bb6f1c3c437ba01554e388be38268f02abc05de4fe0b8b01e9205" Jan 27 20:14:34 crc kubenswrapper[4858]: I0127 20:14:34.077109 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a" path="/var/lib/kubelet/pods/d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a/volumes" Jan 27 20:14:42 crc kubenswrapper[4858]: I0127 20:14:42.278349 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.599316 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xt9cz"] Jan 27 20:14:45 crc kubenswrapper[4858]: E0127 20:14:45.599931 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a" containerName="extract-content" Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.599946 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a" containerName="extract-content" Jan 27 20:14:45 crc kubenswrapper[4858]: E0127 20:14:45.599959 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a" containerName="registry-server" Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 
Jan 27 20:14:45 crc kubenswrapper[4858]: E0127 20:14:45.599988 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a" containerName="extract-utilities"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.599996 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a" containerName="extract-utilities"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.600134 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d01f79-dd9f-4af9-a443-3ffd38cc8a1a" containerName="registry-server"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.600660 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.621794 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xt9cz"]
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.789675 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-trusted-ca\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.789726 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxsmq\" (UniqueName: \"kubernetes.io/projected/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-kube-api-access-bxsmq\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.789757 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.789857 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-registry-certificates\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.789912 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.789980 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-registry-tls\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.790064 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.790209 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-bound-sa-token\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.817192 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.891004 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-registry-certificates\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.891052 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.891083 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-registry-tls\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.891109 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.891147 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-bound-sa-token\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.891181 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-trusted-ca\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.891204 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxsmq\" (UniqueName: \"kubernetes.io/projected/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-kube-api-access-bxsmq\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.891765 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.892870 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-registry-certificates\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.893391 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-trusted-ca\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.903640 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.903713 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-registry-tls\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.910023 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-bound-sa-token\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.910457 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxsmq\" (UniqueName: \"kubernetes.io/projected/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-kube-api-access-bxsmq\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz"
\"kubernetes.io/projected/d34f2c57-44d1-437a-9f7c-a55d37b7dfd2-kube-api-access-bxsmq\") pod \"image-registry-66df7c8f76-xt9cz\" (UID: \"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2\") " pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz" Jan 27 20:14:45 crc kubenswrapper[4858]: I0127 20:14:45.919207 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz" Jan 27 20:14:46 crc kubenswrapper[4858]: I0127 20:14:46.383302 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xt9cz"] Jan 27 20:14:47 crc kubenswrapper[4858]: I0127 20:14:47.210584 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz" event={"ID":"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2","Type":"ContainerStarted","Data":"50e033dc89790c07557ba2ae004ee85dde735af93e1647e2a8e0a05eab6b955f"} Jan 27 20:14:47 crc kubenswrapper[4858]: I0127 20:14:47.210864 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz" Jan 27 20:14:47 crc kubenswrapper[4858]: I0127 20:14:47.210876 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz" event={"ID":"d34f2c57-44d1-437a-9f7c-a55d37b7dfd2","Type":"ContainerStarted","Data":"02ad7e4773d4083d83d114f17a40608e65d0ce76aa125fd199f2743e9a427ec5"} Jan 27 20:14:47 crc kubenswrapper[4858]: I0127 20:14:47.231904 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz" podStartSLOduration=2.231881786 podStartE2EDuration="2.231881786s" podCreationTimestamp="2026-01-27 20:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:14:47.228832655 +0000 UTC m=+431.936648381" watchObservedRunningTime="2026-01-27 20:14:47.231881786 +0000 UTC m=+431.939697492" Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.224091 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hps49"] Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.224645 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hps49" podUID="26fc1461-1071-4f74-9d54-4de6f9a268dc" containerName="registry-server" containerID="cri-o://041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3" gracePeriod=2 Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.437337 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rl5k9"] Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.437809 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rl5k9" podUID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" containerName="registry-server" containerID="cri-o://d98324c23116de1914ceaf800e7c57fcaaed2366359984d42a4ba0c909a370a0" gracePeriod=2 Jan 27 20:14:51 crc kubenswrapper[4858]: E0127 20:14:51.626197 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3 is running failed: container process not found" 
containerID="041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 20:14:51 crc kubenswrapper[4858]: E0127 20:14:51.627652 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3 is running failed: container process not found" containerID="041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 20:14:51 crc kubenswrapper[4858]: E0127 20:14:51.628361 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3 is running failed: container process not found" containerID="041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 20:14:51 crc kubenswrapper[4858]: E0127 20:14:51.628446 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-hps49" podUID="26fc1461-1071-4f74-9d54-4de6f9a268dc" containerName="registry-server" Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.691216 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hps49" Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.775127 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72p4f\" (UniqueName: \"kubernetes.io/projected/26fc1461-1071-4f74-9d54-4de6f9a268dc-kube-api-access-72p4f\") pod \"26fc1461-1071-4f74-9d54-4de6f9a268dc\" (UID: \"26fc1461-1071-4f74-9d54-4de6f9a268dc\") " Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.775181 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fc1461-1071-4f74-9d54-4de6f9a268dc-catalog-content\") pod \"26fc1461-1071-4f74-9d54-4de6f9a268dc\" (UID: \"26fc1461-1071-4f74-9d54-4de6f9a268dc\") " Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.775204 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fc1461-1071-4f74-9d54-4de6f9a268dc-utilities\") pod \"26fc1461-1071-4f74-9d54-4de6f9a268dc\" (UID: \"26fc1461-1071-4f74-9d54-4de6f9a268dc\") " Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.776512 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26fc1461-1071-4f74-9d54-4de6f9a268dc-utilities" (OuterVolumeSpecName: "utilities") pod "26fc1461-1071-4f74-9d54-4de6f9a268dc" (UID: "26fc1461-1071-4f74-9d54-4de6f9a268dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.793471 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26fc1461-1071-4f74-9d54-4de6f9a268dc-kube-api-access-72p4f" (OuterVolumeSpecName: "kube-api-access-72p4f") pod "26fc1461-1071-4f74-9d54-4de6f9a268dc" (UID: "26fc1461-1071-4f74-9d54-4de6f9a268dc"). 
InnerVolumeSpecName "kube-api-access-72p4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.797408 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl"] Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.797698 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" podUID="3e179bef-d974-45fa-abf8-dfe3901ae243" containerName="route-controller-manager" containerID="cri-o://ba38c15d32972539661ebd6e098e326ab434ef437ed6f1301d8f784c9afbcbb2" gracePeriod=30 Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.848901 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rl5k9" Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.883640 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72p4f\" (UniqueName: \"kubernetes.io/projected/26fc1461-1071-4f74-9d54-4de6f9a268dc-kube-api-access-72p4f\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.883674 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26fc1461-1071-4f74-9d54-4de6f9a268dc-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.928389 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26fc1461-1071-4f74-9d54-4de6f9a268dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26fc1461-1071-4f74-9d54-4de6f9a268dc" (UID: "26fc1461-1071-4f74-9d54-4de6f9a268dc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.985176 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzdhb\" (UniqueName: \"kubernetes.io/projected/a57b4016-f4b5-4f01-aeed-9a730cd323c1-kube-api-access-bzdhb\") pod \"a57b4016-f4b5-4f01-aeed-9a730cd323c1\" (UID: \"a57b4016-f4b5-4f01-aeed-9a730cd323c1\") " Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.985333 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a57b4016-f4b5-4f01-aeed-9a730cd323c1-catalog-content\") pod \"a57b4016-f4b5-4f01-aeed-9a730cd323c1\" (UID: \"a57b4016-f4b5-4f01-aeed-9a730cd323c1\") " Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.985359 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a57b4016-f4b5-4f01-aeed-9a730cd323c1-utilities\") pod \"a57b4016-f4b5-4f01-aeed-9a730cd323c1\" (UID: \"a57b4016-f4b5-4f01-aeed-9a730cd323c1\") " Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.985649 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26fc1461-1071-4f74-9d54-4de6f9a268dc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.986473 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a57b4016-f4b5-4f01-aeed-9a730cd323c1-utilities" (OuterVolumeSpecName: "utilities") pod "a57b4016-f4b5-4f01-aeed-9a730cd323c1" (UID: "a57b4016-f4b5-4f01-aeed-9a730cd323c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:14:51 crc kubenswrapper[4858]: I0127 20:14:51.987967 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a57b4016-f4b5-4f01-aeed-9a730cd323c1-kube-api-access-bzdhb" (OuterVolumeSpecName: "kube-api-access-bzdhb") pod "a57b4016-f4b5-4f01-aeed-9a730cd323c1" (UID: "a57b4016-f4b5-4f01-aeed-9a730cd323c1"). InnerVolumeSpecName "kube-api-access-bzdhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.037956 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a57b4016-f4b5-4f01-aeed-9a730cd323c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a57b4016-f4b5-4f01-aeed-9a730cd323c1" (UID: "a57b4016-f4b5-4f01-aeed-9a730cd323c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.095633 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a57b4016-f4b5-4f01-aeed-9a730cd323c1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.095690 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a57b4016-f4b5-4f01-aeed-9a730cd323c1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.095704 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzdhb\" (UniqueName: \"kubernetes.io/projected/a57b4016-f4b5-4f01-aeed-9a730cd323c1-kube-api-access-bzdhb\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.182222 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.242063 4858 generic.go:334] "Generic (PLEG): container finished" podID="26fc1461-1071-4f74-9d54-4de6f9a268dc" containerID="041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3" exitCode=0 Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.242142 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hps49" event={"ID":"26fc1461-1071-4f74-9d54-4de6f9a268dc","Type":"ContainerDied","Data":"041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3"} Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.242183 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hps49" event={"ID":"26fc1461-1071-4f74-9d54-4de6f9a268dc","Type":"ContainerDied","Data":"b03964bd4af5347b8be478005b4a868b2894f9544c83d389b074c594365406ff"} Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.242208 4858 scope.go:117] "RemoveContainer" containerID="041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.242391 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hps49" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.247066 4858 generic.go:334] "Generic (PLEG): container finished" podID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" containerID="d98324c23116de1914ceaf800e7c57fcaaed2366359984d42a4ba0c909a370a0" exitCode=0 Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.247279 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rl5k9" event={"ID":"a57b4016-f4b5-4f01-aeed-9a730cd323c1","Type":"ContainerDied","Data":"d98324c23116de1914ceaf800e7c57fcaaed2366359984d42a4ba0c909a370a0"} Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.247402 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rl5k9" event={"ID":"a57b4016-f4b5-4f01-aeed-9a730cd323c1","Type":"ContainerDied","Data":"3bfb75124f1e7466a45a3f3aa3487b5e7735efd5e609f8dc49570d434654b8ad"} Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.247344 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rl5k9" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.248908 4858 generic.go:334] "Generic (PLEG): container finished" podID="3e179bef-d974-45fa-abf8-dfe3901ae243" containerID="ba38c15d32972539661ebd6e098e326ab434ef437ed6f1301d8f784c9afbcbb2" exitCode=0 Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.248973 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" event={"ID":"3e179bef-d974-45fa-abf8-dfe3901ae243","Type":"ContainerDied","Data":"ba38c15d32972539661ebd6e098e326ab434ef437ed6f1301d8f784c9afbcbb2"} Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.249014 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" event={"ID":"3e179bef-d974-45fa-abf8-dfe3901ae243","Type":"ContainerDied","Data":"506d1688b9e34ce6cf0c16fef6bed6cf5247d265e55c9783eed334fb09dd5846"} Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.249096 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.268890 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hps49"] Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.282034 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hps49"] Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.283070 4858 scope.go:117] "RemoveContainer" containerID="2174a284212b718f9c8d720cdb3dd58ed7d5915d9cf66f800fcbbfd5816c33e1" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.289964 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rl5k9"] Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.295390 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rl5k9"] Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.298811 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e179bef-d974-45fa-abf8-dfe3901ae243-serving-cert\") pod \"3e179bef-d974-45fa-abf8-dfe3901ae243\" (UID: \"3e179bef-d974-45fa-abf8-dfe3901ae243\") " Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.298923 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wmg8\" (UniqueName: \"kubernetes.io/projected/3e179bef-d974-45fa-abf8-dfe3901ae243-kube-api-access-9wmg8\") pod \"3e179bef-d974-45fa-abf8-dfe3901ae243\" (UID: \"3e179bef-d974-45fa-abf8-dfe3901ae243\") " Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.299024 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e179bef-d974-45fa-abf8-dfe3901ae243-client-ca\") pod \"3e179bef-d974-45fa-abf8-dfe3901ae243\" (UID: \"3e179bef-d974-45fa-abf8-dfe3901ae243\") " Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.299069 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e179bef-d974-45fa-abf8-dfe3901ae243-config\") pod \"3e179bef-d974-45fa-abf8-dfe3901ae243\" (UID: \"3e179bef-d974-45fa-abf8-dfe3901ae243\") " Jan 27 20:14:52 crc 
kubenswrapper[4858]: I0127 20:14:52.300291 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e179bef-d974-45fa-abf8-dfe3901ae243-client-ca" (OuterVolumeSpecName: "client-ca") pod "3e179bef-d974-45fa-abf8-dfe3901ae243" (UID: "3e179bef-d974-45fa-abf8-dfe3901ae243"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.300348 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e179bef-d974-45fa-abf8-dfe3901ae243-config" (OuterVolumeSpecName: "config") pod "3e179bef-d974-45fa-abf8-dfe3901ae243" (UID: "3e179bef-d974-45fa-abf8-dfe3901ae243"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.302910 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e179bef-d974-45fa-abf8-dfe3901ae243-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3e179bef-d974-45fa-abf8-dfe3901ae243" (UID: "3e179bef-d974-45fa-abf8-dfe3901ae243"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.303114 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e179bef-d974-45fa-abf8-dfe3901ae243-kube-api-access-9wmg8" (OuterVolumeSpecName: "kube-api-access-9wmg8") pod "3e179bef-d974-45fa-abf8-dfe3901ae243" (UID: "3e179bef-d974-45fa-abf8-dfe3901ae243"). InnerVolumeSpecName "kube-api-access-9wmg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.304500 4858 scope.go:117] "RemoveContainer" containerID="27a905278f779361b832714d40621e24a9c81aab6b61f4d87fcfe80a27eb8e4f" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.322277 4858 scope.go:117] "RemoveContainer" containerID="041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3" Jan 27 20:14:52 crc kubenswrapper[4858]: E0127 20:14:52.323145 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3\": container with ID starting with 041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3 not found: ID does not exist" containerID="041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.323202 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3"} err="failed to get container status \"041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3\": rpc error: code = NotFound desc = could not find container \"041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3\": container with ID starting with 041717f474a2a689972451edc78e4502b0d8daf68d5b878e94d3f0138a4f47a3 not found: ID does not exist" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.323243 4858 scope.go:117] "RemoveContainer" containerID="2174a284212b718f9c8d720cdb3dd58ed7d5915d9cf66f800fcbbfd5816c33e1" Jan 27 20:14:52 crc kubenswrapper[4858]: E0127 20:14:52.323629 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2174a284212b718f9c8d720cdb3dd58ed7d5915d9cf66f800fcbbfd5816c33e1\": container with ID starting with 2174a284212b718f9c8d720cdb3dd58ed7d5915d9cf66f800fcbbfd5816c33e1 not found: ID does not exist" containerID="2174a284212b718f9c8d720cdb3dd58ed7d5915d9cf66f800fcbbfd5816c33e1" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.323666 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2174a284212b718f9c8d720cdb3dd58ed7d5915d9cf66f800fcbbfd5816c33e1"} err="failed to get container status \"2174a284212b718f9c8d720cdb3dd58ed7d5915d9cf66f800fcbbfd5816c33e1\": rpc error: code = NotFound desc = could not find container \"2174a284212b718f9c8d720cdb3dd58ed7d5915d9cf66f800fcbbfd5816c33e1\": container with ID starting with 2174a284212b718f9c8d720cdb3dd58ed7d5915d9cf66f800fcbbfd5816c33e1 not found: ID does not exist" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.323687 4858 scope.go:117] "RemoveContainer" containerID="27a905278f779361b832714d40621e24a9c81aab6b61f4d87fcfe80a27eb8e4f" Jan 27 20:14:52 crc kubenswrapper[4858]: E0127 20:14:52.325369 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27a905278f779361b832714d40621e24a9c81aab6b61f4d87fcfe80a27eb8e4f\": container with ID starting with 27a905278f779361b832714d40621e24a9c81aab6b61f4d87fcfe80a27eb8e4f not found: ID does not exist" containerID="27a905278f779361b832714d40621e24a9c81aab6b61f4d87fcfe80a27eb8e4f" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.325398 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27a905278f779361b832714d40621e24a9c81aab6b61f4d87fcfe80a27eb8e4f"} err="failed to get container status \"27a905278f779361b832714d40621e24a9c81aab6b61f4d87fcfe80a27eb8e4f\": rpc error: code = NotFound desc = could not find container \"27a905278f779361b832714d40621e24a9c81aab6b61f4d87fcfe80a27eb8e4f\": container with ID starting with 27a905278f779361b832714d40621e24a9c81aab6b61f4d87fcfe80a27eb8e4f not found: ID does not exist" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.325415 4858 scope.go:117] "RemoveContainer" containerID="d98324c23116de1914ceaf800e7c57fcaaed2366359984d42a4ba0c909a370a0" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.376993 4858 scope.go:117] "RemoveContainer" containerID="7bfe14de2b77ac1529774f166c646ffb39cb30801ec8c981fcfce601c7341ef5" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.399994 4858 scope.go:117] "RemoveContainer" containerID="3f902f21e7d88e9809678713a44458910361a9ae40d9672c59b2fdf3accca3cb" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.400973 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e179bef-d974-45fa-abf8-dfe3901ae243-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.401012 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wmg8\" (UniqueName: \"kubernetes.io/projected/3e179bef-d974-45fa-abf8-dfe3901ae243-kube-api-access-9wmg8\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.401027 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e179bef-d974-45fa-abf8-dfe3901ae243-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.401041 4858 reconciler_common.go:293] "Volume detached 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e179bef-d974-45fa-abf8-dfe3901ae243-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.417245 4858 scope.go:117] "RemoveContainer" containerID="d98324c23116de1914ceaf800e7c57fcaaed2366359984d42a4ba0c909a370a0" Jan 27 20:14:52 crc kubenswrapper[4858]: E0127 20:14:52.417908 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d98324c23116de1914ceaf800e7c57fcaaed2366359984d42a4ba0c909a370a0\": container with ID starting with d98324c23116de1914ceaf800e7c57fcaaed2366359984d42a4ba0c909a370a0 not found: ID does not exist" containerID="d98324c23116de1914ceaf800e7c57fcaaed2366359984d42a4ba0c909a370a0" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.417964 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98324c23116de1914ceaf800e7c57fcaaed2366359984d42a4ba0c909a370a0"} err="failed to get container status \"d98324c23116de1914ceaf800e7c57fcaaed2366359984d42a4ba0c909a370a0\": rpc error: code = NotFound desc = could not find container \"d98324c23116de1914ceaf800e7c57fcaaed2366359984d42a4ba0c909a370a0\": container with ID starting with d98324c23116de1914ceaf800e7c57fcaaed2366359984d42a4ba0c909a370a0 not found: ID does not exist" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.418008 4858 scope.go:117] "RemoveContainer" containerID="7bfe14de2b77ac1529774f166c646ffb39cb30801ec8c981fcfce601c7341ef5" Jan 27 20:14:52 crc kubenswrapper[4858]: E0127 20:14:52.418700 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bfe14de2b77ac1529774f166c646ffb39cb30801ec8c981fcfce601c7341ef5\": container with ID starting with 7bfe14de2b77ac1529774f166c646ffb39cb30801ec8c981fcfce601c7341ef5 not found: ID does not exist" containerID="7bfe14de2b77ac1529774f166c646ffb39cb30801ec8c981fcfce601c7341ef5" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.418744 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bfe14de2b77ac1529774f166c646ffb39cb30801ec8c981fcfce601c7341ef5"} err="failed to get container status \"7bfe14de2b77ac1529774f166c646ffb39cb30801ec8c981fcfce601c7341ef5\": rpc error: code = NotFound desc = could not find container \"7bfe14de2b77ac1529774f166c646ffb39cb30801ec8c981fcfce601c7341ef5\": container with ID starting with 7bfe14de2b77ac1529774f166c646ffb39cb30801ec8c981fcfce601c7341ef5 not found: ID does not exist" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.418771 4858 scope.go:117] "RemoveContainer" containerID="3f902f21e7d88e9809678713a44458910361a9ae40d9672c59b2fdf3accca3cb" Jan 27 20:14:52 crc kubenswrapper[4858]: E0127 20:14:52.419171 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f902f21e7d88e9809678713a44458910361a9ae40d9672c59b2fdf3accca3cb\": container with ID starting with 3f902f21e7d88e9809678713a44458910361a9ae40d9672c59b2fdf3accca3cb not found: ID does not exist" containerID="3f902f21e7d88e9809678713a44458910361a9ae40d9672c59b2fdf3accca3cb" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.419218 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f902f21e7d88e9809678713a44458910361a9ae40d9672c59b2fdf3accca3cb"} err="failed to get container status 
\"3f902f21e7d88e9809678713a44458910361a9ae40d9672c59b2fdf3accca3cb\": rpc error: code = NotFound desc = could not find container \"3f902f21e7d88e9809678713a44458910361a9ae40d9672c59b2fdf3accca3cb\": container with ID starting with 3f902f21e7d88e9809678713a44458910361a9ae40d9672c59b2fdf3accca3cb not found: ID does not exist" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.419246 4858 scope.go:117] "RemoveContainer" containerID="ba38c15d32972539661ebd6e098e326ab434ef437ed6f1301d8f784c9afbcbb2" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.441019 4858 scope.go:117] "RemoveContainer" containerID="ba38c15d32972539661ebd6e098e326ab434ef437ed6f1301d8f784c9afbcbb2" Jan 27 20:14:52 crc kubenswrapper[4858]: E0127 20:14:52.441749 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba38c15d32972539661ebd6e098e326ab434ef437ed6f1301d8f784c9afbcbb2\": container with ID starting with ba38c15d32972539661ebd6e098e326ab434ef437ed6f1301d8f784c9afbcbb2 not found: ID does not exist" containerID="ba38c15d32972539661ebd6e098e326ab434ef437ed6f1301d8f784c9afbcbb2" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.441846 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba38c15d32972539661ebd6e098e326ab434ef437ed6f1301d8f784c9afbcbb2"} err="failed to get container status \"ba38c15d32972539661ebd6e098e326ab434ef437ed6f1301d8f784c9afbcbb2\": rpc error: code = NotFound desc = could not find container \"ba38c15d32972539661ebd6e098e326ab434ef437ed6f1301d8f784c9afbcbb2\": container with ID starting with ba38c15d32972539661ebd6e098e326ab434ef437ed6f1301d8f784c9afbcbb2 not found: ID does not exist" Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.602376 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl"] Jan 27 20:14:52 crc kubenswrapper[4858]: I0127 20:14:52.607078 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-645d5c8f55-4tzcl"] Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.131141 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7"] Jan 27 20:14:53 crc kubenswrapper[4858]: E0127 20:14:53.131726 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26fc1461-1071-4f74-9d54-4de6f9a268dc" containerName="extract-content" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.131765 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="26fc1461-1071-4f74-9d54-4de6f9a268dc" containerName="extract-content" Jan 27 20:14:53 crc kubenswrapper[4858]: E0127 20:14:53.131786 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" containerName="extract-content" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.131796 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" containerName="extract-content" Jan 27 20:14:53 crc kubenswrapper[4858]: E0127 20:14:53.131814 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26fc1461-1071-4f74-9d54-4de6f9a268dc" containerName="extract-utilities" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.131826 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="26fc1461-1071-4f74-9d54-4de6f9a268dc" containerName="extract-utilities" Jan 27 
Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.131845 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="26fc1461-1071-4f74-9d54-4de6f9a268dc" containerName="registry-server"
Jan 27 20:14:53 crc kubenswrapper[4858]: E0127 20:14:53.131859 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" containerName="extract-utilities"
Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.131868 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" containerName="extract-utilities"
Jan 27 20:14:53 crc kubenswrapper[4858]: E0127 20:14:53.131881 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" containerName="registry-server"
Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.131889 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" containerName="registry-server"
Jan 27 20:14:53 crc kubenswrapper[4858]: E0127 20:14:53.131905 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e179bef-d974-45fa-abf8-dfe3901ae243" containerName="route-controller-manager"
Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.131914 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e179bef-d974-45fa-abf8-dfe3901ae243" containerName="route-controller-manager"
Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.132049 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e179bef-d974-45fa-abf8-dfe3901ae243" containerName="route-controller-manager"
Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.132066 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="26fc1461-1071-4f74-9d54-4de6f9a268dc" containerName="registry-server"
Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.132086 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" containerName="registry-server"
Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.132870 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7"
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.142220 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.142404 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.142602 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.143018 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.143535 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.144280 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.153762 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7"] Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.212781 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vtzd\" (UniqueName: \"kubernetes.io/projected/40cb8809-1b8e-4218-9216-b3f5009e58b3-kube-api-access-6vtzd\") pod \"route-controller-manager-85b87685bb-mrcm7\" (UID: \"40cb8809-1b8e-4218-9216-b3f5009e58b3\") " pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.212974 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40cb8809-1b8e-4218-9216-b3f5009e58b3-client-ca\") pod \"route-controller-manager-85b87685bb-mrcm7\" (UID: \"40cb8809-1b8e-4218-9216-b3f5009e58b3\") " pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.213041 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40cb8809-1b8e-4218-9216-b3f5009e58b3-serving-cert\") pod \"route-controller-manager-85b87685bb-mrcm7\" (UID: \"40cb8809-1b8e-4218-9216-b3f5009e58b3\") " pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.213078 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40cb8809-1b8e-4218-9216-b3f5009e58b3-config\") pod \"route-controller-manager-85b87685bb-mrcm7\" (UID: \"40cb8809-1b8e-4218-9216-b3f5009e58b3\") " pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.315074 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vtzd\" (UniqueName: \"kubernetes.io/projected/40cb8809-1b8e-4218-9216-b3f5009e58b3-kube-api-access-6vtzd\") pod 
\"route-controller-manager-85b87685bb-mrcm7\" (UID: \"40cb8809-1b8e-4218-9216-b3f5009e58b3\") " pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.315170 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40cb8809-1b8e-4218-9216-b3f5009e58b3-client-ca\") pod \"route-controller-manager-85b87685bb-mrcm7\" (UID: \"40cb8809-1b8e-4218-9216-b3f5009e58b3\") " pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.315217 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40cb8809-1b8e-4218-9216-b3f5009e58b3-serving-cert\") pod \"route-controller-manager-85b87685bb-mrcm7\" (UID: \"40cb8809-1b8e-4218-9216-b3f5009e58b3\") " pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.315244 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40cb8809-1b8e-4218-9216-b3f5009e58b3-config\") pod \"route-controller-manager-85b87685bb-mrcm7\" (UID: \"40cb8809-1b8e-4218-9216-b3f5009e58b3\") " pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.317100 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40cb8809-1b8e-4218-9216-b3f5009e58b3-client-ca\") pod \"route-controller-manager-85b87685bb-mrcm7\" (UID: \"40cb8809-1b8e-4218-9216-b3f5009e58b3\") " pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.317303 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40cb8809-1b8e-4218-9216-b3f5009e58b3-config\") pod \"route-controller-manager-85b87685bb-mrcm7\" (UID: \"40cb8809-1b8e-4218-9216-b3f5009e58b3\") " pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.323042 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40cb8809-1b8e-4218-9216-b3f5009e58b3-serving-cert\") pod \"route-controller-manager-85b87685bb-mrcm7\" (UID: \"40cb8809-1b8e-4218-9216-b3f5009e58b3\") " pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.336131 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vtzd\" (UniqueName: \"kubernetes.io/projected/40cb8809-1b8e-4218-9216-b3f5009e58b3-kube-api-access-6vtzd\") pod \"route-controller-manager-85b87685bb-mrcm7\" (UID: \"40cb8809-1b8e-4218-9216-b3f5009e58b3\") " pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.456290 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.631467 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2r5qs"] Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.633078 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2r5qs" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" containerName="registry-server" containerID="cri-o://3e9757943174b0f4be6c6e0d2d492b964125f1918ffa347861aa4a93e0808cf4" gracePeriod=2 Jan 27 20:14:53 crc kubenswrapper[4858]: I0127 20:14:53.706400 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7"] Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.031094 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2r5qs" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.079871 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26fc1461-1071-4f74-9d54-4de6f9a268dc" path="/var/lib/kubelet/pods/26fc1461-1071-4f74-9d54-4de6f9a268dc/volumes" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.080536 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e179bef-d974-45fa-abf8-dfe3901ae243" path="/var/lib/kubelet/pods/3e179bef-d974-45fa-abf8-dfe3901ae243/volumes" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.081269 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a57b4016-f4b5-4f01-aeed-9a730cd323c1" path="/var/lib/kubelet/pods/a57b4016-f4b5-4f01-aeed-9a730cd323c1/volumes" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.128962 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da279f23-0e34-40de-9b49-f325361ce0ff-utilities\") pod \"da279f23-0e34-40de-9b49-f325361ce0ff\" (UID: \"da279f23-0e34-40de-9b49-f325361ce0ff\") " Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.129108 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmd5b\" (UniqueName: \"kubernetes.io/projected/da279f23-0e34-40de-9b49-f325361ce0ff-kube-api-access-mmd5b\") pod \"da279f23-0e34-40de-9b49-f325361ce0ff\" (UID: \"da279f23-0e34-40de-9b49-f325361ce0ff\") " Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.129131 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da279f23-0e34-40de-9b49-f325361ce0ff-catalog-content\") pod \"da279f23-0e34-40de-9b49-f325361ce0ff\" (UID: \"da279f23-0e34-40de-9b49-f325361ce0ff\") " Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.129966 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da279f23-0e34-40de-9b49-f325361ce0ff-utilities" (OuterVolumeSpecName: "utilities") pod "da279f23-0e34-40de-9b49-f325361ce0ff" (UID: "da279f23-0e34-40de-9b49-f325361ce0ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.138107 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da279f23-0e34-40de-9b49-f325361ce0ff-kube-api-access-mmd5b" (OuterVolumeSpecName: "kube-api-access-mmd5b") pod "da279f23-0e34-40de-9b49-f325361ce0ff" (UID: "da279f23-0e34-40de-9b49-f325361ce0ff"). InnerVolumeSpecName "kube-api-access-mmd5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.184415 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da279f23-0e34-40de-9b49-f325361ce0ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da279f23-0e34-40de-9b49-f325361ce0ff" (UID: "da279f23-0e34-40de-9b49-f325361ce0ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.231432 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da279f23-0e34-40de-9b49-f325361ce0ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.231503 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmd5b\" (UniqueName: \"kubernetes.io/projected/da279f23-0e34-40de-9b49-f325361ce0ff-kube-api-access-mmd5b\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.231523 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da279f23-0e34-40de-9b49-f325361ce0ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.272206 4858 generic.go:334] "Generic (PLEG): container finished" podID="da279f23-0e34-40de-9b49-f325361ce0ff" containerID="3e9757943174b0f4be6c6e0d2d492b964125f1918ffa347861aa4a93e0808cf4" exitCode=0 Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.272307 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5qs" event={"ID":"da279f23-0e34-40de-9b49-f325361ce0ff","Type":"ContainerDied","Data":"3e9757943174b0f4be6c6e0d2d492b964125f1918ffa347861aa4a93e0808cf4"} Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.272354 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2r5qs" event={"ID":"da279f23-0e34-40de-9b49-f325361ce0ff","Type":"ContainerDied","Data":"7ee2aa5a351048ada1cec8c7c43360a40e8a7e28ec064a3b69626667ee931686"} Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.272376 4858 scope.go:117] "RemoveContainer" containerID="3e9757943174b0f4be6c6e0d2d492b964125f1918ffa347861aa4a93e0808cf4" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.272613 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2r5qs" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.278107 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" event={"ID":"40cb8809-1b8e-4218-9216-b3f5009e58b3","Type":"ContainerStarted","Data":"1699c0e584909cc6a4ece98dae0c261d169be1222d662578b8612efaf2d77298"} Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.278145 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" event={"ID":"40cb8809-1b8e-4218-9216-b3f5009e58b3","Type":"ContainerStarted","Data":"608894282a4c0d838398b586cead17c72ce1cb25fb32808375e94e3abee17ed1"} Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.278459 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.297018 4858 scope.go:117] "RemoveContainer" containerID="8b750f8bd0addae3c6bfa364bf61ce12b88ca226529cae678e9e3ddb9e4bd974" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.310282 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" podStartSLOduration=3.310254104 podStartE2EDuration="3.310254104s" podCreationTimestamp="2026-01-27 20:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:14:54.305370978 +0000 UTC m=+439.013186684" watchObservedRunningTime="2026-01-27 20:14:54.310254104 +0000 UTC m=+439.018069810" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.326342 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2r5qs"] Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.328285 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2r5qs"] Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.334053 4858 scope.go:117] "RemoveContainer" containerID="5e8819bd26ca0e6dc17464dd2e96af1123367e7bf6e991ff6e45f7df208b678d" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.358458 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85b87685bb-mrcm7" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.367526 4858 scope.go:117] "RemoveContainer" containerID="3e9757943174b0f4be6c6e0d2d492b964125f1918ffa347861aa4a93e0808cf4" Jan 27 20:14:54 crc kubenswrapper[4858]: E0127 20:14:54.368314 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e9757943174b0f4be6c6e0d2d492b964125f1918ffa347861aa4a93e0808cf4\": container with ID starting with 3e9757943174b0f4be6c6e0d2d492b964125f1918ffa347861aa4a93e0808cf4 not found: ID does not exist" containerID="3e9757943174b0f4be6c6e0d2d492b964125f1918ffa347861aa4a93e0808cf4" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.368426 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e9757943174b0f4be6c6e0d2d492b964125f1918ffa347861aa4a93e0808cf4"} err="failed to get container status \"3e9757943174b0f4be6c6e0d2d492b964125f1918ffa347861aa4a93e0808cf4\": rpc error: code = NotFound desc = could not 
find container \"3e9757943174b0f4be6c6e0d2d492b964125f1918ffa347861aa4a93e0808cf4\": container with ID starting with 3e9757943174b0f4be6c6e0d2d492b964125f1918ffa347861aa4a93e0808cf4 not found: ID does not exist" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.368535 4858 scope.go:117] "RemoveContainer" containerID="8b750f8bd0addae3c6bfa364bf61ce12b88ca226529cae678e9e3ddb9e4bd974" Jan 27 20:14:54 crc kubenswrapper[4858]: E0127 20:14:54.371638 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b750f8bd0addae3c6bfa364bf61ce12b88ca226529cae678e9e3ddb9e4bd974\": container with ID starting with 8b750f8bd0addae3c6bfa364bf61ce12b88ca226529cae678e9e3ddb9e4bd974 not found: ID does not exist" containerID="8b750f8bd0addae3c6bfa364bf61ce12b88ca226529cae678e9e3ddb9e4bd974" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.371702 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b750f8bd0addae3c6bfa364bf61ce12b88ca226529cae678e9e3ddb9e4bd974"} err="failed to get container status \"8b750f8bd0addae3c6bfa364bf61ce12b88ca226529cae678e9e3ddb9e4bd974\": rpc error: code = NotFound desc = could not find container \"8b750f8bd0addae3c6bfa364bf61ce12b88ca226529cae678e9e3ddb9e4bd974\": container with ID starting with 8b750f8bd0addae3c6bfa364bf61ce12b88ca226529cae678e9e3ddb9e4bd974 not found: ID does not exist" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.371744 4858 scope.go:117] "RemoveContainer" containerID="5e8819bd26ca0e6dc17464dd2e96af1123367e7bf6e991ff6e45f7df208b678d" Jan 27 20:14:54 crc kubenswrapper[4858]: E0127 20:14:54.372167 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e8819bd26ca0e6dc17464dd2e96af1123367e7bf6e991ff6e45f7df208b678d\": container with ID starting with 5e8819bd26ca0e6dc17464dd2e96af1123367e7bf6e991ff6e45f7df208b678d not found: ID does not exist" containerID="5e8819bd26ca0e6dc17464dd2e96af1123367e7bf6e991ff6e45f7df208b678d" Jan 27 20:14:54 crc kubenswrapper[4858]: I0127 20:14:54.372217 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e8819bd26ca0e6dc17464dd2e96af1123367e7bf6e991ff6e45f7df208b678d"} err="failed to get container status \"5e8819bd26ca0e6dc17464dd2e96af1123367e7bf6e991ff6e45f7df208b678d\": rpc error: code = NotFound desc = could not find container \"5e8819bd26ca0e6dc17464dd2e96af1123367e7bf6e991ff6e45f7df208b678d\": container with ID starting with 5e8819bd26ca0e6dc17464dd2e96af1123367e7bf6e991ff6e45f7df208b678d not found: ID does not exist" Jan 27 20:14:56 crc kubenswrapper[4858]: I0127 20:14:56.080104 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" path="/var/lib/kubelet/pods/da279f23-0e34-40de-9b49-f325361ce0ff/volumes" Jan 27 20:14:58 crc kubenswrapper[4858]: I0127 20:14:58.970354 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4jtm"] Jan 27 20:14:58 crc kubenswrapper[4858]: I0127 20:14:58.971322 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j4jtm" podUID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" containerName="registry-server" containerID="cri-o://b24938f7a772f87c3143bc4100b6b6909a156d798e437308562fc3fbaa1da07c" gracePeriod=30 Jan 27 20:14:58 crc kubenswrapper[4858]: I0127 20:14:58.980509 4858 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b9vrj"] Jan 27 20:14:58 crc kubenswrapper[4858]: I0127 20:14:58.980798 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b9vrj" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" containerName="registry-server" containerID="cri-o://0cf52353bcd410874368c8357627e8c71701369836fcb80245f707e56c82c8ab" gracePeriod=30 Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.002646 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5wtjt"] Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.002904 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" podUID="4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d" containerName="marketplace-operator" containerID="cri-o://8d0105146cfe7d4576dbcf760ad1195e4cabdd2a2738d03ea670f9e227012eda" gracePeriod=30 Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.023040 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p68jw"] Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.023360 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p68jw" podUID="471132af-0b76-4c4a-8560-deedd9d3381b" containerName="registry-server" containerID="cri-o://9ce77be1574f0d928284a55cd7191e956caa7d296dd3fc2b9e8575e2bbceb4b1" gracePeriod=30 Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.027956 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gnzjf"] Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.028260 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gnzjf" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" containerName="registry-server" containerID="cri-o://b66b491bddb3ac20e20722e0414ab4c8df70231c97a5895147e3062d585856c7" gracePeriod=30 Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.038356 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qlw92"] Jan 27 20:14:59 crc kubenswrapper[4858]: E0127 20:14:59.038681 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" containerName="registry-server" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.038706 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" containerName="registry-server" Jan 27 20:14:59 crc kubenswrapper[4858]: E0127 20:14:59.038725 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" containerName="extract-utilities" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.038733 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" containerName="extract-utilities" Jan 27 20:14:59 crc kubenswrapper[4858]: E0127 20:14:59.038745 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" containerName="extract-content" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.038752 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="da279f23-0e34-40de-9b49-f325361ce0ff" containerName="extract-content" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 
Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.039318 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qlw92"
Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.042884 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qlw92"]
Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.094328 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35d290ca-2486-41c6-9a0e-0b905e2994bb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qlw92\" (UID: \"35d290ca-2486-41c6-9a0e-0b905e2994bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-qlw92"
Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.094396 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35d290ca-2486-41c6-9a0e-0b905e2994bb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qlw92\" (UID: \"35d290ca-2486-41c6-9a0e-0b905e2994bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-qlw92"
Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.094421 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l76q4\" (UniqueName: \"kubernetes.io/projected/35d290ca-2486-41c6-9a0e-0b905e2994bb-kube-api-access-l76q4\") pod \"marketplace-operator-79b997595-qlw92\" (UID: \"35d290ca-2486-41c6-9a0e-0b905e2994bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-qlw92"
Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.196215 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35d290ca-2486-41c6-9a0e-0b905e2994bb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qlw92\" (UID: \"35d290ca-2486-41c6-9a0e-0b905e2994bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-qlw92"
Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.196278 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35d290ca-2486-41c6-9a0e-0b905e2994bb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qlw92\" (UID: \"35d290ca-2486-41c6-9a0e-0b905e2994bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-qlw92"
Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.196301 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l76q4\" (UniqueName: \"kubernetes.io/projected/35d290ca-2486-41c6-9a0e-0b905e2994bb-kube-api-access-l76q4\") pod \"marketplace-operator-79b997595-qlw92\" (UID: \"35d290ca-2486-41c6-9a0e-0b905e2994bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-qlw92"
Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.199741 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/35d290ca-2486-41c6-9a0e-0b905e2994bb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qlw92\" (UID: \"35d290ca-2486-41c6-9a0e-0b905e2994bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-qlw92"
\"35d290ca-2486-41c6-9a0e-0b905e2994bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-qlw92" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.208051 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/35d290ca-2486-41c6-9a0e-0b905e2994bb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qlw92\" (UID: \"35d290ca-2486-41c6-9a0e-0b905e2994bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-qlw92" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.233104 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l76q4\" (UniqueName: \"kubernetes.io/projected/35d290ca-2486-41c6-9a0e-0b905e2994bb-kube-api-access-l76q4\") pod \"marketplace-operator-79b997595-qlw92\" (UID: \"35d290ca-2486-41c6-9a0e-0b905e2994bb\") " pod="openshift-marketplace/marketplace-operator-79b997595-qlw92" Jan 27 20:14:59 crc kubenswrapper[4858]: E0127 20:14:59.239848 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cf52353bcd410874368c8357627e8c71701369836fcb80245f707e56c82c8ab is running failed: container process not found" containerID="0cf52353bcd410874368c8357627e8c71701369836fcb80245f707e56c82c8ab" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 20:14:59 crc kubenswrapper[4858]: E0127 20:14:59.240292 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cf52353bcd410874368c8357627e8c71701369836fcb80245f707e56c82c8ab is running failed: container process not found" containerID="0cf52353bcd410874368c8357627e8c71701369836fcb80245f707e56c82c8ab" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 20:14:59 crc kubenswrapper[4858]: E0127 20:14:59.241992 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cf52353bcd410874368c8357627e8c71701369836fcb80245f707e56c82c8ab is running failed: container process not found" containerID="0cf52353bcd410874368c8357627e8c71701369836fcb80245f707e56c82c8ab" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 20:14:59 crc kubenswrapper[4858]: E0127 20:14:59.242033 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0cf52353bcd410874368c8357627e8c71701369836fcb80245f707e56c82c8ab is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-b9vrj" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" containerName="registry-server" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.310878 4858 generic.go:334] "Generic (PLEG): container finished" podID="471132af-0b76-4c4a-8560-deedd9d3381b" containerID="9ce77be1574f0d928284a55cd7191e956caa7d296dd3fc2b9e8575e2bbceb4b1" exitCode=0 Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.310938 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p68jw" event={"ID":"471132af-0b76-4c4a-8560-deedd9d3381b","Type":"ContainerDied","Data":"9ce77be1574f0d928284a55cd7191e956caa7d296dd3fc2b9e8575e2bbceb4b1"} Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.312836 4858 generic.go:334] "Generic (PLEG): container finished" podID="4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d" 
containerID="8d0105146cfe7d4576dbcf760ad1195e4cabdd2a2738d03ea670f9e227012eda" exitCode=0 Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.312883 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" event={"ID":"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d","Type":"ContainerDied","Data":"8d0105146cfe7d4576dbcf760ad1195e4cabdd2a2738d03ea670f9e227012eda"} Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.312909 4858 scope.go:117] "RemoveContainer" containerID="efb59333516b70197bce8799ad8c5a0a47720e9ba044fff40ce02cf45e14988e" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.315697 4858 generic.go:334] "Generic (PLEG): container finished" podID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" containerID="0cf52353bcd410874368c8357627e8c71701369836fcb80245f707e56c82c8ab" exitCode=0 Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.315748 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9vrj" event={"ID":"9cdbabda-bda6-438a-a671-0f15b0ad57c0","Type":"ContainerDied","Data":"0cf52353bcd410874368c8357627e8c71701369836fcb80245f707e56c82c8ab"} Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.317630 4858 generic.go:334] "Generic (PLEG): container finished" podID="ad57fc45-ce61-4d62-adb4-2a655f77e751" containerID="b66b491bddb3ac20e20722e0414ab4c8df70231c97a5895147e3062d585856c7" exitCode=0 Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.317703 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gnzjf" event={"ID":"ad57fc45-ce61-4d62-adb4-2a655f77e751","Type":"ContainerDied","Data":"b66b491bddb3ac20e20722e0414ab4c8df70231c97a5895147e3062d585856c7"} Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.321574 4858 generic.go:334] "Generic (PLEG): container finished" podID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" containerID="b24938f7a772f87c3143bc4100b6b6909a156d798e437308562fc3fbaa1da07c" exitCode=0 Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.321599 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4jtm" event={"ID":"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc","Type":"ContainerDied","Data":"b24938f7a772f87c3143bc4100b6b6909a156d798e437308562fc3fbaa1da07c"} Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.383094 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qlw92" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.398298 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j4jtm" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.499231 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-catalog-content\") pod \"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc\" (UID: \"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc\") " Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.499353 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7cqg\" (UniqueName: \"kubernetes.io/projected/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-kube-api-access-w7cqg\") pod \"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc\" (UID: \"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc\") " Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.499400 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-utilities\") pod \"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc\" (UID: \"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc\") " Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.501261 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-utilities" (OuterVolumeSpecName: "utilities") pod "405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" (UID: "405f7c13-54ae-46fa-99c1-7c8a61c2f3bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.505885 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-kube-api-access-w7cqg" (OuterVolumeSpecName: "kube-api-access-w7cqg") pod "405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" (UID: "405f7c13-54ae-46fa-99c1-7c8a61c2f3bc"). InnerVolumeSpecName "kube-api-access-w7cqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.570063 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b9vrj" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.573637 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" (UID: "405f7c13-54ae-46fa-99c1-7c8a61c2f3bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.583145 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p68jw" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.601132 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7cqg\" (UniqueName: \"kubernetes.io/projected/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-kube-api-access-w7cqg\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.601161 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.601172 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.606376 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gnzjf" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.649071 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.702519 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad57fc45-ce61-4d62-adb4-2a655f77e751-catalog-content\") pod \"ad57fc45-ce61-4d62-adb4-2a655f77e751\" (UID: \"ad57fc45-ce61-4d62-adb4-2a655f77e751\") " Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.702608 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/471132af-0b76-4c4a-8560-deedd9d3381b-catalog-content\") pod \"471132af-0b76-4c4a-8560-deedd9d3381b\" (UID: \"471132af-0b76-4c4a-8560-deedd9d3381b\") " Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.702649 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cdbabda-bda6-438a-a671-0f15b0ad57c0-catalog-content\") pod \"9cdbabda-bda6-438a-a671-0f15b0ad57c0\" (UID: \"9cdbabda-bda6-438a-a671-0f15b0ad57c0\") " Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.702672 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hlkt\" (UniqueName: \"kubernetes.io/projected/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d-kube-api-access-4hlkt\") pod \"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d\" (UID: \"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d\") " Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.702715 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skwhn\" (UniqueName: \"kubernetes.io/projected/ad57fc45-ce61-4d62-adb4-2a655f77e751-kube-api-access-skwhn\") pod \"ad57fc45-ce61-4d62-adb4-2a655f77e751\" (UID: \"ad57fc45-ce61-4d62-adb4-2a655f77e751\") " Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.702739 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5c9d\" (UniqueName: \"kubernetes.io/projected/471132af-0b76-4c4a-8560-deedd9d3381b-kube-api-access-j5c9d\") pod \"471132af-0b76-4c4a-8560-deedd9d3381b\" (UID: \"471132af-0b76-4c4a-8560-deedd9d3381b\") " Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.702761 4858 
Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.702788 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d-marketplace-operator-metrics\") pod \"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d\" (UID: \"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d\") "
Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.702812 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cdbabda-bda6-438a-a671-0f15b0ad57c0-utilities\") pod \"9cdbabda-bda6-438a-a671-0f15b0ad57c0\" (UID: \"9cdbabda-bda6-438a-a671-0f15b0ad57c0\") "
Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.702833 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/471132af-0b76-4c4a-8560-deedd9d3381b-utilities\") pod \"471132af-0b76-4c4a-8560-deedd9d3381b\" (UID: \"471132af-0b76-4c4a-8560-deedd9d3381b\") "
Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.702861 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcc4s\" (UniqueName: \"kubernetes.io/projected/9cdbabda-bda6-438a-a671-0f15b0ad57c0-kube-api-access-hcc4s\") pod \"9cdbabda-bda6-438a-a671-0f15b0ad57c0\" (UID: \"9cdbabda-bda6-438a-a671-0f15b0ad57c0\") "
Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.702885 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad57fc45-ce61-4d62-adb4-2a655f77e751-utilities\") pod \"ad57fc45-ce61-4d62-adb4-2a655f77e751\" (UID: \"ad57fc45-ce61-4d62-adb4-2a655f77e751\") "
Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.703950 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad57fc45-ce61-4d62-adb4-2a655f77e751-utilities" (OuterVolumeSpecName: "utilities") pod "ad57fc45-ce61-4d62-adb4-2a655f77e751" (UID: "ad57fc45-ce61-4d62-adb4-2a655f77e751"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.704140 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cdbabda-bda6-438a-a671-0f15b0ad57c0-utilities" (OuterVolumeSpecName: "utilities") pod "9cdbabda-bda6-438a-a671-0f15b0ad57c0" (UID: "9cdbabda-bda6-438a-a671-0f15b0ad57c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.704171 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/471132af-0b76-4c4a-8560-deedd9d3381b-utilities" (OuterVolumeSpecName: "utilities") pod "471132af-0b76-4c4a-8560-deedd9d3381b" (UID: "471132af-0b76-4c4a-8560-deedd9d3381b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.704293 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d" (UID: "4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.707434 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad57fc45-ce61-4d62-adb4-2a655f77e751-kube-api-access-skwhn" (OuterVolumeSpecName: "kube-api-access-skwhn") pod "ad57fc45-ce61-4d62-adb4-2a655f77e751" (UID: "ad57fc45-ce61-4d62-adb4-2a655f77e751"). InnerVolumeSpecName "kube-api-access-skwhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.708485 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/471132af-0b76-4c4a-8560-deedd9d3381b-kube-api-access-j5c9d" (OuterVolumeSpecName: "kube-api-access-j5c9d") pod "471132af-0b76-4c4a-8560-deedd9d3381b" (UID: "471132af-0b76-4c4a-8560-deedd9d3381b"). InnerVolumeSpecName "kube-api-access-j5c9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.718116 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d" (UID: "4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.718518 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d-kube-api-access-4hlkt" (OuterVolumeSpecName: "kube-api-access-4hlkt") pod "4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d" (UID: "4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d"). InnerVolumeSpecName "kube-api-access-4hlkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.719980 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cdbabda-bda6-438a-a671-0f15b0ad57c0-kube-api-access-hcc4s" (OuterVolumeSpecName: "kube-api-access-hcc4s") pod "9cdbabda-bda6-438a-a671-0f15b0ad57c0" (UID: "9cdbabda-bda6-438a-a671-0f15b0ad57c0"). InnerVolumeSpecName "kube-api-access-hcc4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.743406 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/471132af-0b76-4c4a-8560-deedd9d3381b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "471132af-0b76-4c4a-8560-deedd9d3381b" (UID: "471132af-0b76-4c4a-8560-deedd9d3381b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.770752 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cdbabda-bda6-438a-a671-0f15b0ad57c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cdbabda-bda6-438a-a671-0f15b0ad57c0" (UID: "9cdbabda-bda6-438a-a671-0f15b0ad57c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.803966 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skwhn\" (UniqueName: \"kubernetes.io/projected/ad57fc45-ce61-4d62-adb4-2a655f77e751-kube-api-access-skwhn\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.804234 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5c9d\" (UniqueName: \"kubernetes.io/projected/471132af-0b76-4c4a-8560-deedd9d3381b-kube-api-access-j5c9d\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.804317 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.804389 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cdbabda-bda6-438a-a671-0f15b0ad57c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.804565 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.804646 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/471132af-0b76-4c4a-8560-deedd9d3381b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.804718 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcc4s\" (UniqueName: \"kubernetes.io/projected/9cdbabda-bda6-438a-a671-0f15b0ad57c0-kube-api-access-hcc4s\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.804783 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad57fc45-ce61-4d62-adb4-2a655f77e751-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.804846 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/471132af-0b76-4c4a-8560-deedd9d3381b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.804903 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cdbabda-bda6-438a-a671-0f15b0ad57c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.804956 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hlkt\" (UniqueName: \"kubernetes.io/projected/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d-kube-api-access-4hlkt\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:59 crc 
kubenswrapper[4858]: I0127 20:14:59.834163 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad57fc45-ce61-4d62-adb4-2a655f77e751-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad57fc45-ce61-4d62-adb4-2a655f77e751" (UID: "ad57fc45-ce61-4d62-adb4-2a655f77e751"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.906669 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad57fc45-ce61-4d62-adb4-2a655f77e751-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:14:59 crc kubenswrapper[4858]: I0127 20:14:59.932626 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qlw92"] Jan 27 20:14:59 crc kubenswrapper[4858]: W0127 20:14:59.941382 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35d290ca_2486_41c6_9a0e_0b905e2994bb.slice/crio-3d7312ec006d8cdf9a7cbb7b3efe946ca15dccde4326233b3c8534756afd92b3 WatchSource:0}: Error finding container 3d7312ec006d8cdf9a7cbb7b3efe946ca15dccde4326233b3c8534756afd92b3: Status 404 returned error can't find the container with id 3d7312ec006d8cdf9a7cbb7b3efe946ca15dccde4326233b3c8534756afd92b3 Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181108 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj"] Jan 27 20:15:00 crc kubenswrapper[4858]: E0127 20:15:00.181380 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d" containerName="marketplace-operator" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181397 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d" containerName="marketplace-operator" Jan 27 20:15:00 crc kubenswrapper[4858]: E0127 20:15:00.181409 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="471132af-0b76-4c4a-8560-deedd9d3381b" containerName="extract-utilities" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181416 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="471132af-0b76-4c4a-8560-deedd9d3381b" containerName="extract-utilities" Jan 27 20:15:00 crc kubenswrapper[4858]: E0127 20:15:00.181427 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" containerName="extract-utilities" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181434 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" containerName="extract-utilities" Jan 27 20:15:00 crc kubenswrapper[4858]: E0127 20:15:00.181444 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" containerName="registry-server" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181450 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" containerName="registry-server" Jan 27 20:15:00 crc kubenswrapper[4858]: E0127 20:15:00.181458 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" containerName="extract-content" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181463 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" containerName="extract-content" Jan 27 20:15:00 crc kubenswrapper[4858]: E0127 20:15:00.181474 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" containerName="extract-utilities" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181479 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" containerName="extract-utilities" Jan 27 20:15:00 crc kubenswrapper[4858]: E0127 20:15:00.181487 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="471132af-0b76-4c4a-8560-deedd9d3381b" containerName="extract-content" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181493 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="471132af-0b76-4c4a-8560-deedd9d3381b" containerName="extract-content" Jan 27 20:15:00 crc kubenswrapper[4858]: E0127 20:15:00.181503 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" containerName="registry-server" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181509 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" containerName="registry-server" Jan 27 20:15:00 crc kubenswrapper[4858]: E0127 20:15:00.181516 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" containerName="extract-content" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181522 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" containerName="extract-content" Jan 27 20:15:00 crc kubenswrapper[4858]: E0127 20:15:00.181529 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="471132af-0b76-4c4a-8560-deedd9d3381b" containerName="registry-server" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181535 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="471132af-0b76-4c4a-8560-deedd9d3381b" containerName="registry-server" Jan 27 20:15:00 crc kubenswrapper[4858]: E0127 20:15:00.181542 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" containerName="registry-server" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181566 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" containerName="registry-server" Jan 27 20:15:00 crc kubenswrapper[4858]: E0127 20:15:00.181578 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" containerName="extract-utilities" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181586 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" containerName="extract-utilities" Jan 27 20:15:00 crc kubenswrapper[4858]: E0127 20:15:00.181597 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" containerName="extract-content" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181603 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" containerName="extract-content" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181700 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d" containerName="marketplace-operator" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181719 4858 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d" containerName="marketplace-operator" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181734 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" containerName="registry-server" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181742 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="471132af-0b76-4c4a-8560-deedd9d3381b" containerName="registry-server" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181751 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" containerName="registry-server" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.181761 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" containerName="registry-server" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.182211 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.184472 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.184619 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.194593 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj"] Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.311195 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpd8c\" (UniqueName: \"kubernetes.io/projected/120a892c-adc8-488d-91e7-3c76b47af2fb-kube-api-access-wpd8c\") pod \"collect-profiles-29492415-rjcqj\" (UID: \"120a892c-adc8-488d-91e7-3c76b47af2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.311317 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/120a892c-adc8-488d-91e7-3c76b47af2fb-config-volume\") pod \"collect-profiles-29492415-rjcqj\" (UID: \"120a892c-adc8-488d-91e7-3c76b47af2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.311361 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/120a892c-adc8-488d-91e7-3c76b47af2fb-secret-volume\") pod \"collect-profiles-29492415-rjcqj\" (UID: \"120a892c-adc8-488d-91e7-3c76b47af2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.331975 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gnzjf" event={"ID":"ad57fc45-ce61-4d62-adb4-2a655f77e751","Type":"ContainerDied","Data":"64e1292b403c673354237bd33a182b093617c923564517838df96d91f1cea25f"} Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.332012 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gnzjf" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.332117 4858 scope.go:117] "RemoveContainer" containerID="b66b491bddb3ac20e20722e0414ab4c8df70231c97a5895147e3062d585856c7" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.338876 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4jtm" event={"ID":"405f7c13-54ae-46fa-99c1-7c8a61c2f3bc","Type":"ContainerDied","Data":"ddf909e950a05f7d76440119014b4d10f9a9569d15de226233e901015e8a7662"} Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.339040 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4jtm" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.343417 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p68jw" event={"ID":"471132af-0b76-4c4a-8560-deedd9d3381b","Type":"ContainerDied","Data":"b0fd166eba88b2fe2f219f4341a0cb796c6f8047f5c58d572bfc7430531c7f64"} Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.343444 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p68jw" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.345046 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" event={"ID":"4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d","Type":"ContainerDied","Data":"1ee21ff5ab0d50e14ca7eff9307e1854d34764ba8f023df85ff9e0e98357f6de"} Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.345102 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5wtjt" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.350266 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qlw92" event={"ID":"35d290ca-2486-41c6-9a0e-0b905e2994bb","Type":"ContainerStarted","Data":"7246c05ce3b00f9f23d5367b7ef9252c6f15aa270b4f81768afe844185b7e9e5"} Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.350319 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qlw92" event={"ID":"35d290ca-2486-41c6-9a0e-0b905e2994bb","Type":"ContainerStarted","Data":"3d7312ec006d8cdf9a7cbb7b3efe946ca15dccde4326233b3c8534756afd92b3"} Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.353829 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gnzjf"] Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.354673 4858 scope.go:117] "RemoveContainer" containerID="d9d77eb2216a249ab7435d27a44c2f2d153b4cc7b5eb38fa29089edc33a092a7" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.356961 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gnzjf"] Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.357087 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b9vrj" event={"ID":"9cdbabda-bda6-438a-a671-0f15b0ad57c0","Type":"ContainerDied","Data":"54d418b54f6d1ead93dfaf6c91b96200729e2bb6a2fdf8e414ab55a3c3de6298"} Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.357134 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b9vrj" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.374552 4858 scope.go:117] "RemoveContainer" containerID="e1361cc076754f188cd1b18d242748ceb380025b17c1ba6ba90adebe607eb089" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.376074 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p68jw"] Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.381813 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p68jw"] Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.391428 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qlw92" podStartSLOduration=1.3914088869999999 podStartE2EDuration="1.391408887s" podCreationTimestamp="2026-01-27 20:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:15:00.390768767 +0000 UTC m=+445.098584493" watchObservedRunningTime="2026-01-27 20:15:00.391408887 +0000 UTC m=+445.099224593" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.406340 4858 scope.go:117] "RemoveContainer" containerID="b24938f7a772f87c3143bc4100b6b6909a156d798e437308562fc3fbaa1da07c" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.414729 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/120a892c-adc8-488d-91e7-3c76b47af2fb-secret-volume\") pod \"collect-profiles-29492415-rjcqj\" (UID: \"120a892c-adc8-488d-91e7-3c76b47af2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.414944 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpd8c\" (UniqueName: \"kubernetes.io/projected/120a892c-adc8-488d-91e7-3c76b47af2fb-kube-api-access-wpd8c\") pod \"collect-profiles-29492415-rjcqj\" (UID: \"120a892c-adc8-488d-91e7-3c76b47af2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.415024 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/120a892c-adc8-488d-91e7-3c76b47af2fb-config-volume\") pod \"collect-profiles-29492415-rjcqj\" (UID: \"120a892c-adc8-488d-91e7-3c76b47af2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.416408 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/120a892c-adc8-488d-91e7-3c76b47af2fb-config-volume\") pod \"collect-profiles-29492415-rjcqj\" (UID: \"120a892c-adc8-488d-91e7-3c76b47af2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.421874 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4jtm"] Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.430336 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/120a892c-adc8-488d-91e7-3c76b47af2fb-secret-volume\") pod \"collect-profiles-29492415-rjcqj\" (UID: 
\"120a892c-adc8-488d-91e7-3c76b47af2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.435098 4858 scope.go:117] "RemoveContainer" containerID="2600c0a02c2137bc337c925ed9c2af54b977e6f8540ea2fa73cf5229121fdc13" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.436048 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j4jtm"] Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.443395 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpd8c\" (UniqueName: \"kubernetes.io/projected/120a892c-adc8-488d-91e7-3c76b47af2fb-kube-api-access-wpd8c\") pod \"collect-profiles-29492415-rjcqj\" (UID: \"120a892c-adc8-488d-91e7-3c76b47af2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.446077 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b9vrj"] Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.449148 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b9vrj"] Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.451973 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5wtjt"] Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.454551 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5wtjt"] Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.488502 4858 scope.go:117] "RemoveContainer" containerID="f56c1c0fda53b78bb9cea1303a29c2206b2538894952b9d84d118a4a0215ed7a" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.505434 4858 scope.go:117] "RemoveContainer" containerID="9ce77be1574f0d928284a55cd7191e956caa7d296dd3fc2b9e8575e2bbceb4b1" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.509352 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.526276 4858 scope.go:117] "RemoveContainer" containerID="304a0da64956f115429c2092b3c2e57f8858c6aa349a00a6b61f530d7a0dac49" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.544197 4858 scope.go:117] "RemoveContainer" containerID="79344889d771d89a68ad5936af47e8b5245d725f9edea3ac40a7a35c9a42c153" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.559928 4858 scope.go:117] "RemoveContainer" containerID="8d0105146cfe7d4576dbcf760ad1195e4cabdd2a2738d03ea670f9e227012eda" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.573826 4858 scope.go:117] "RemoveContainer" containerID="0cf52353bcd410874368c8357627e8c71701369836fcb80245f707e56c82c8ab" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.587499 4858 scope.go:117] "RemoveContainer" containerID="c0b481b3a0dd98b88784c0ca344a5ae35de1d5418a11c93df208e78df407073b" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.603234 4858 scope.go:117] "RemoveContainer" containerID="80e3aab9b212016ff4ea276f584dc450ff24bf8be30a06d755e31496640d91e9" Jan 27 20:15:00 crc kubenswrapper[4858]: I0127 20:15:00.738990 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj"] Jan 27 20:15:00 crc kubenswrapper[4858]: W0127 20:15:00.744472 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod120a892c_adc8_488d_91e7_3c76b47af2fb.slice/crio-88b7f7536a7eb938d8e94744abf1b0d4320fa44761da436ddd692715d7d34069 WatchSource:0}: Error finding container 88b7f7536a7eb938d8e94744abf1b0d4320fa44761da436ddd692715d7d34069: Status 404 returned error can't find the container with id 88b7f7536a7eb938d8e94744abf1b0d4320fa44761da436ddd692715d7d34069 Jan 27 20:15:01 crc kubenswrapper[4858]: I0127 20:15:01.369223 4858 generic.go:334] "Generic (PLEG): container finished" podID="120a892c-adc8-488d-91e7-3c76b47af2fb" containerID="ffcacacbea21b164f70d9b3388c5d22f4b4efaef049aff948c3b43dccbf312e7" exitCode=0 Jan 27 20:15:01 crc kubenswrapper[4858]: I0127 20:15:01.369276 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj" event={"ID":"120a892c-adc8-488d-91e7-3c76b47af2fb","Type":"ContainerDied","Data":"ffcacacbea21b164f70d9b3388c5d22f4b4efaef049aff948c3b43dccbf312e7"} Jan 27 20:15:01 crc kubenswrapper[4858]: I0127 20:15:01.369696 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj" event={"ID":"120a892c-adc8-488d-91e7-3c76b47af2fb","Type":"ContainerStarted","Data":"88b7f7536a7eb938d8e94744abf1b0d4320fa44761da436ddd692715d7d34069"} Jan 27 20:15:01 crc kubenswrapper[4858]: I0127 20:15:01.370113 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qlw92" Jan 27 20:15:01 crc kubenswrapper[4858]: I0127 20:15:01.372740 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qlw92" Jan 27 20:15:01 crc kubenswrapper[4858]: I0127 20:15:01.833007 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6md42"] Jan 27 20:15:01 crc kubenswrapper[4858]: E0127 20:15:01.833461 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d" containerName="marketplace-operator" Jan 27 20:15:01 crc kubenswrapper[4858]: I0127 20:15:01.833539 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d" containerName="marketplace-operator" Jan 27 20:15:01 crc kubenswrapper[4858]: I0127 20:15:01.834363 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6md42" Jan 27 20:15:01 crc kubenswrapper[4858]: I0127 20:15:01.836339 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 20:15:01 crc kubenswrapper[4858]: I0127 20:15:01.848782 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6md42"] Jan 27 20:15:01 crc kubenswrapper[4858]: I0127 20:15:01.936256 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e59c3191-f721-45cc-b1d2-2e6bd8bc0797-catalog-content\") pod \"redhat-marketplace-6md42\" (UID: \"e59c3191-f721-45cc-b1d2-2e6bd8bc0797\") " pod="openshift-marketplace/redhat-marketplace-6md42" Jan 27 20:15:01 crc kubenswrapper[4858]: I0127 20:15:01.936297 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e59c3191-f721-45cc-b1d2-2e6bd8bc0797-utilities\") pod \"redhat-marketplace-6md42\" (UID: \"e59c3191-f721-45cc-b1d2-2e6bd8bc0797\") " pod="openshift-marketplace/redhat-marketplace-6md42" Jan 27 20:15:01 crc kubenswrapper[4858]: I0127 20:15:01.936337 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr5wv\" (UniqueName: \"kubernetes.io/projected/e59c3191-f721-45cc-b1d2-2e6bd8bc0797-kube-api-access-lr5wv\") pod \"redhat-marketplace-6md42\" (UID: \"e59c3191-f721-45cc-b1d2-2e6bd8bc0797\") " pod="openshift-marketplace/redhat-marketplace-6md42" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.037689 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr5wv\" (UniqueName: \"kubernetes.io/projected/e59c3191-f721-45cc-b1d2-2e6bd8bc0797-kube-api-access-lr5wv\") pod \"redhat-marketplace-6md42\" (UID: \"e59c3191-f721-45cc-b1d2-2e6bd8bc0797\") " pod="openshift-marketplace/redhat-marketplace-6md42" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.037833 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e59c3191-f721-45cc-b1d2-2e6bd8bc0797-catalog-content\") pod \"redhat-marketplace-6md42\" (UID: \"e59c3191-f721-45cc-b1d2-2e6bd8bc0797\") " pod="openshift-marketplace/redhat-marketplace-6md42" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.037870 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e59c3191-f721-45cc-b1d2-2e6bd8bc0797-utilities\") pod \"redhat-marketplace-6md42\" (UID: \"e59c3191-f721-45cc-b1d2-2e6bd8bc0797\") " pod="openshift-marketplace/redhat-marketplace-6md42" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.038755 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e59c3191-f721-45cc-b1d2-2e6bd8bc0797-utilities\") pod \"redhat-marketplace-6md42\" (UID: 
\"e59c3191-f721-45cc-b1d2-2e6bd8bc0797\") " pod="openshift-marketplace/redhat-marketplace-6md42" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.038757 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e59c3191-f721-45cc-b1d2-2e6bd8bc0797-catalog-content\") pod \"redhat-marketplace-6md42\" (UID: \"e59c3191-f721-45cc-b1d2-2e6bd8bc0797\") " pod="openshift-marketplace/redhat-marketplace-6md42" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.039747 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jrqvn"] Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.040905 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jrqvn" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.043269 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.054190 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jrqvn"] Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.061235 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr5wv\" (UniqueName: \"kubernetes.io/projected/e59c3191-f721-45cc-b1d2-2e6bd8bc0797-kube-api-access-lr5wv\") pod \"redhat-marketplace-6md42\" (UID: \"e59c3191-f721-45cc-b1d2-2e6bd8bc0797\") " pod="openshift-marketplace/redhat-marketplace-6md42" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.086519 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="405f7c13-54ae-46fa-99c1-7c8a61c2f3bc" path="/var/lib/kubelet/pods/405f7c13-54ae-46fa-99c1-7c8a61c2f3bc/volumes" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.087322 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="471132af-0b76-4c4a-8560-deedd9d3381b" path="/var/lib/kubelet/pods/471132af-0b76-4c4a-8560-deedd9d3381b/volumes" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.088190 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d" path="/var/lib/kubelet/pods/4f6cf7fc-5cd0-4b28-992c-41a0e8526f4d/volumes" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.089229 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cdbabda-bda6-438a-a671-0f15b0ad57c0" path="/var/lib/kubelet/pods/9cdbabda-bda6-438a-a671-0f15b0ad57c0/volumes" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.089951 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad57fc45-ce61-4d62-adb4-2a655f77e751" path="/var/lib/kubelet/pods/ad57fc45-ce61-4d62-adb4-2a655f77e751/volumes" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.139231 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfll4\" (UniqueName: \"kubernetes.io/projected/9ce09890-416b-4b69-87f8-5a695f3c2ce8-kube-api-access-zfll4\") pod \"certified-operators-jrqvn\" (UID: \"9ce09890-416b-4b69-87f8-5a695f3c2ce8\") " pod="openshift-marketplace/certified-operators-jrqvn" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.139282 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9ce09890-416b-4b69-87f8-5a695f3c2ce8-catalog-content\") pod \"certified-operators-jrqvn\" (UID: \"9ce09890-416b-4b69-87f8-5a695f3c2ce8\") " pod="openshift-marketplace/certified-operators-jrqvn" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.139541 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ce09890-416b-4b69-87f8-5a695f3c2ce8-utilities\") pod \"certified-operators-jrqvn\" (UID: \"9ce09890-416b-4b69-87f8-5a695f3c2ce8\") " pod="openshift-marketplace/certified-operators-jrqvn" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.157189 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6md42" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.241024 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfll4\" (UniqueName: \"kubernetes.io/projected/9ce09890-416b-4b69-87f8-5a695f3c2ce8-kube-api-access-zfll4\") pod \"certified-operators-jrqvn\" (UID: \"9ce09890-416b-4b69-87f8-5a695f3c2ce8\") " pod="openshift-marketplace/certified-operators-jrqvn" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.241072 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ce09890-416b-4b69-87f8-5a695f3c2ce8-catalog-content\") pod \"certified-operators-jrqvn\" (UID: \"9ce09890-416b-4b69-87f8-5a695f3c2ce8\") " pod="openshift-marketplace/certified-operators-jrqvn" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.241148 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ce09890-416b-4b69-87f8-5a695f3c2ce8-utilities\") pod \"certified-operators-jrqvn\" (UID: \"9ce09890-416b-4b69-87f8-5a695f3c2ce8\") " pod="openshift-marketplace/certified-operators-jrqvn" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.241779 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ce09890-416b-4b69-87f8-5a695f3c2ce8-catalog-content\") pod \"certified-operators-jrqvn\" (UID: \"9ce09890-416b-4b69-87f8-5a695f3c2ce8\") " pod="openshift-marketplace/certified-operators-jrqvn" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.241872 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ce09890-416b-4b69-87f8-5a695f3c2ce8-utilities\") pod \"certified-operators-jrqvn\" (UID: \"9ce09890-416b-4b69-87f8-5a695f3c2ce8\") " pod="openshift-marketplace/certified-operators-jrqvn" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.265033 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfll4\" (UniqueName: \"kubernetes.io/projected/9ce09890-416b-4b69-87f8-5a695f3c2ce8-kube-api-access-zfll4\") pod \"certified-operators-jrqvn\" (UID: \"9ce09890-416b-4b69-87f8-5a695f3c2ce8\") " pod="openshift-marketplace/certified-operators-jrqvn" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.356984 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jrqvn" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.562529 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6md42"] Jan 27 20:15:02 crc kubenswrapper[4858]: W0127 20:15:02.570117 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode59c3191_f721_45cc_b1d2_2e6bd8bc0797.slice/crio-b30edbae93ac420631f1ecdd7fbde58216defeaa9cc104b56c55a7402739df79 WatchSource:0}: Error finding container b30edbae93ac420631f1ecdd7fbde58216defeaa9cc104b56c55a7402739df79: Status 404 returned error can't find the container with id b30edbae93ac420631f1ecdd7fbde58216defeaa9cc104b56c55a7402739df79 Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.597362 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.645920 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpd8c\" (UniqueName: \"kubernetes.io/projected/120a892c-adc8-488d-91e7-3c76b47af2fb-kube-api-access-wpd8c\") pod \"120a892c-adc8-488d-91e7-3c76b47af2fb\" (UID: \"120a892c-adc8-488d-91e7-3c76b47af2fb\") " Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.645964 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/120a892c-adc8-488d-91e7-3c76b47af2fb-config-volume\") pod \"120a892c-adc8-488d-91e7-3c76b47af2fb\" (UID: \"120a892c-adc8-488d-91e7-3c76b47af2fb\") " Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.646078 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/120a892c-adc8-488d-91e7-3c76b47af2fb-secret-volume\") pod \"120a892c-adc8-488d-91e7-3c76b47af2fb\" (UID: \"120a892c-adc8-488d-91e7-3c76b47af2fb\") " Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.647254 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/120a892c-adc8-488d-91e7-3c76b47af2fb-config-volume" (OuterVolumeSpecName: "config-volume") pod "120a892c-adc8-488d-91e7-3c76b47af2fb" (UID: "120a892c-adc8-488d-91e7-3c76b47af2fb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.649918 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/120a892c-adc8-488d-91e7-3c76b47af2fb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "120a892c-adc8-488d-91e7-3c76b47af2fb" (UID: "120a892c-adc8-488d-91e7-3c76b47af2fb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.650057 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/120a892c-adc8-488d-91e7-3c76b47af2fb-kube-api-access-wpd8c" (OuterVolumeSpecName: "kube-api-access-wpd8c") pod "120a892c-adc8-488d-91e7-3c76b47af2fb" (UID: "120a892c-adc8-488d-91e7-3c76b47af2fb"). InnerVolumeSpecName "kube-api-access-wpd8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.747454 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/120a892c-adc8-488d-91e7-3c76b47af2fb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.750261 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpd8c\" (UniqueName: \"kubernetes.io/projected/120a892c-adc8-488d-91e7-3c76b47af2fb-kube-api-access-wpd8c\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.750373 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/120a892c-adc8-488d-91e7-3c76b47af2fb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:02 crc kubenswrapper[4858]: I0127 20:15:02.777494 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jrqvn"] Jan 27 20:15:02 crc kubenswrapper[4858]: W0127 20:15:02.790080 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ce09890_416b_4b69_87f8_5a695f3c2ce8.slice/crio-ec44b81f0184e1c0b86a89997bbbd3e920462b695cbfae99dab2b830e1a519d9 WatchSource:0}: Error finding container ec44b81f0184e1c0b86a89997bbbd3e920462b695cbfae99dab2b830e1a519d9: Status 404 returned error can't find the container with id ec44b81f0184e1c0b86a89997bbbd3e920462b695cbfae99dab2b830e1a519d9 Jan 27 20:15:03 crc kubenswrapper[4858]: I0127 20:15:03.389796 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj" event={"ID":"120a892c-adc8-488d-91e7-3c76b47af2fb","Type":"ContainerDied","Data":"88b7f7536a7eb938d8e94744abf1b0d4320fa44761da436ddd692715d7d34069"} Jan 27 20:15:03 crc kubenswrapper[4858]: I0127 20:15:03.390121 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88b7f7536a7eb938d8e94744abf1b0d4320fa44761da436ddd692715d7d34069" Jan 27 20:15:03 crc kubenswrapper[4858]: I0127 20:15:03.389881 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj" Jan 27 20:15:03 crc kubenswrapper[4858]: I0127 20:15:03.401891 4858 generic.go:334] "Generic (PLEG): container finished" podID="9ce09890-416b-4b69-87f8-5a695f3c2ce8" containerID="9347acaae79258675cbb0e261775a83c1830f930839d84512b4f1da6935fde29" exitCode=0 Jan 27 20:15:03 crc kubenswrapper[4858]: I0127 20:15:03.401978 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrqvn" event={"ID":"9ce09890-416b-4b69-87f8-5a695f3c2ce8","Type":"ContainerDied","Data":"9347acaae79258675cbb0e261775a83c1830f930839d84512b4f1da6935fde29"} Jan 27 20:15:03 crc kubenswrapper[4858]: I0127 20:15:03.402012 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrqvn" event={"ID":"9ce09890-416b-4b69-87f8-5a695f3c2ce8","Type":"ContainerStarted","Data":"ec44b81f0184e1c0b86a89997bbbd3e920462b695cbfae99dab2b830e1a519d9"} Jan 27 20:15:03 crc kubenswrapper[4858]: I0127 20:15:03.406156 4858 generic.go:334] "Generic (PLEG): container finished" podID="e59c3191-f721-45cc-b1d2-2e6bd8bc0797" containerID="69eff8baae6cd2c632484d3ee56333fe0091e53a8861c269d75bf07cbde70638" exitCode=0 Jan 27 20:15:03 crc kubenswrapper[4858]: I0127 20:15:03.406217 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6md42" event={"ID":"e59c3191-f721-45cc-b1d2-2e6bd8bc0797","Type":"ContainerDied","Data":"69eff8baae6cd2c632484d3ee56333fe0091e53a8861c269d75bf07cbde70638"} Jan 27 20:15:03 crc kubenswrapper[4858]: I0127 20:15:03.406249 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6md42" event={"ID":"e59c3191-f721-45cc-b1d2-2e6bd8bc0797","Type":"ContainerStarted","Data":"b30edbae93ac420631f1ecdd7fbde58216defeaa9cc104b56c55a7402739df79"} Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.247934 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jfrzg"] Jan 27 20:15:04 crc kubenswrapper[4858]: E0127 20:15:04.248234 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120a892c-adc8-488d-91e7-3c76b47af2fb" containerName="collect-profiles" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.248247 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="120a892c-adc8-488d-91e7-3c76b47af2fb" containerName="collect-profiles" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.248380 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="120a892c-adc8-488d-91e7-3c76b47af2fb" containerName="collect-profiles" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.249345 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jfrzg" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.253373 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jfrzg"] Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.274750 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.372864 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7c34e1e-7f03-4b35-8a78-38432c88a885-utilities\") pod \"redhat-operators-jfrzg\" (UID: \"b7c34e1e-7f03-4b35-8a78-38432c88a885\") " pod="openshift-marketplace/redhat-operators-jfrzg" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.374493 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7c34e1e-7f03-4b35-8a78-38432c88a885-catalog-content\") pod \"redhat-operators-jfrzg\" (UID: \"b7c34e1e-7f03-4b35-8a78-38432c88a885\") " pod="openshift-marketplace/redhat-operators-jfrzg" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.374749 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppnzg\" (UniqueName: \"kubernetes.io/projected/b7c34e1e-7f03-4b35-8a78-38432c88a885-kube-api-access-ppnzg\") pod \"redhat-operators-jfrzg\" (UID: \"b7c34e1e-7f03-4b35-8a78-38432c88a885\") " pod="openshift-marketplace/redhat-operators-jfrzg" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.412903 4858 generic.go:334] "Generic (PLEG): container finished" podID="e59c3191-f721-45cc-b1d2-2e6bd8bc0797" containerID="87dc66d6d6c198b86f530f4417831845089e6bd928d57d1245362c064f8952ec" exitCode=0 Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.412942 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6md42" event={"ID":"e59c3191-f721-45cc-b1d2-2e6bd8bc0797","Type":"ContainerDied","Data":"87dc66d6d6c198b86f530f4417831845089e6bd928d57d1245362c064f8952ec"} Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.450025 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xgbcp"] Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.453199 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xgbcp" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.460758 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xgbcp"] Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.468755 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.476126 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppnzg\" (UniqueName: \"kubernetes.io/projected/b7c34e1e-7f03-4b35-8a78-38432c88a885-kube-api-access-ppnzg\") pod \"redhat-operators-jfrzg\" (UID: \"b7c34e1e-7f03-4b35-8a78-38432c88a885\") " pod="openshift-marketplace/redhat-operators-jfrzg" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.476492 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7c34e1e-7f03-4b35-8a78-38432c88a885-utilities\") pod \"redhat-operators-jfrzg\" (UID: \"b7c34e1e-7f03-4b35-8a78-38432c88a885\") " pod="openshift-marketplace/redhat-operators-jfrzg" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.476737 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7c34e1e-7f03-4b35-8a78-38432c88a885-catalog-content\") pod \"redhat-operators-jfrzg\" (UID: \"b7c34e1e-7f03-4b35-8a78-38432c88a885\") " pod="openshift-marketplace/redhat-operators-jfrzg" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.477049 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7c34e1e-7f03-4b35-8a78-38432c88a885-utilities\") pod \"redhat-operators-jfrzg\" (UID: \"b7c34e1e-7f03-4b35-8a78-38432c88a885\") " pod="openshift-marketplace/redhat-operators-jfrzg" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.477276 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7c34e1e-7f03-4b35-8a78-38432c88a885-catalog-content\") pod \"redhat-operators-jfrzg\" (UID: \"b7c34e1e-7f03-4b35-8a78-38432c88a885\") " pod="openshift-marketplace/redhat-operators-jfrzg" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.498122 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppnzg\" (UniqueName: \"kubernetes.io/projected/b7c34e1e-7f03-4b35-8a78-38432c88a885-kube-api-access-ppnzg\") pod \"redhat-operators-jfrzg\" (UID: \"b7c34e1e-7f03-4b35-8a78-38432c88a885\") " pod="openshift-marketplace/redhat-operators-jfrzg" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.578042 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43f8439c-4c71-4ed1-b4db-462915af0785-catalog-content\") pod \"community-operators-xgbcp\" (UID: \"43f8439c-4c71-4ed1-b4db-462915af0785\") " pod="openshift-marketplace/community-operators-xgbcp" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.578151 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43f8439c-4c71-4ed1-b4db-462915af0785-utilities\") pod \"community-operators-xgbcp\" (UID: \"43f8439c-4c71-4ed1-b4db-462915af0785\") " 
pod="openshift-marketplace/community-operators-xgbcp" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.579151 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hst8g\" (UniqueName: \"kubernetes.io/projected/43f8439c-4c71-4ed1-b4db-462915af0785-kube-api-access-hst8g\") pod \"community-operators-xgbcp\" (UID: \"43f8439c-4c71-4ed1-b4db-462915af0785\") " pod="openshift-marketplace/community-operators-xgbcp" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.680479 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43f8439c-4c71-4ed1-b4db-462915af0785-catalog-content\") pod \"community-operators-xgbcp\" (UID: \"43f8439c-4c71-4ed1-b4db-462915af0785\") " pod="openshift-marketplace/community-operators-xgbcp" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.680536 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43f8439c-4c71-4ed1-b4db-462915af0785-utilities\") pod \"community-operators-xgbcp\" (UID: \"43f8439c-4c71-4ed1-b4db-462915af0785\") " pod="openshift-marketplace/community-operators-xgbcp" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.680590 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hst8g\" (UniqueName: \"kubernetes.io/projected/43f8439c-4c71-4ed1-b4db-462915af0785-kube-api-access-hst8g\") pod \"community-operators-xgbcp\" (UID: \"43f8439c-4c71-4ed1-b4db-462915af0785\") " pod="openshift-marketplace/community-operators-xgbcp" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.681336 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43f8439c-4c71-4ed1-b4db-462915af0785-utilities\") pod \"community-operators-xgbcp\" (UID: \"43f8439c-4c71-4ed1-b4db-462915af0785\") " pod="openshift-marketplace/community-operators-xgbcp" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.681467 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43f8439c-4c71-4ed1-b4db-462915af0785-catalog-content\") pod \"community-operators-xgbcp\" (UID: \"43f8439c-4c71-4ed1-b4db-462915af0785\") " pod="openshift-marketplace/community-operators-xgbcp" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.700411 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hst8g\" (UniqueName: \"kubernetes.io/projected/43f8439c-4c71-4ed1-b4db-462915af0785-kube-api-access-hst8g\") pod \"community-operators-xgbcp\" (UID: \"43f8439c-4c71-4ed1-b4db-462915af0785\") " pod="openshift-marketplace/community-operators-xgbcp" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.703656 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jfrzg" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.790533 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xgbcp" Jan 27 20:15:04 crc kubenswrapper[4858]: I0127 20:15:04.986961 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xgbcp"] Jan 27 20:15:04 crc kubenswrapper[4858]: W0127 20:15:04.992797 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43f8439c_4c71_4ed1_b4db_462915af0785.slice/crio-148730f815c05c7831502d7d21c518a614b4196ccbae9006c00c25061ff4de94 WatchSource:0}: Error finding container 148730f815c05c7831502d7d21c518a614b4196ccbae9006c00c25061ff4de94: Status 404 returned error can't find the container with id 148730f815c05c7831502d7d21c518a614b4196ccbae9006c00c25061ff4de94 Jan 27 20:15:05 crc kubenswrapper[4858]: I0127 20:15:05.106914 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jfrzg"] Jan 27 20:15:05 crc kubenswrapper[4858]: I0127 20:15:05.420876 4858 generic.go:334] "Generic (PLEG): container finished" podID="b7c34e1e-7f03-4b35-8a78-38432c88a885" containerID="78fccc68f6a35c82a8a811281838d5f0d3a62df1597c348c6918b0662cbc2669" exitCode=0 Jan 27 20:15:05 crc kubenswrapper[4858]: I0127 20:15:05.420989 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfrzg" event={"ID":"b7c34e1e-7f03-4b35-8a78-38432c88a885","Type":"ContainerDied","Data":"78fccc68f6a35c82a8a811281838d5f0d3a62df1597c348c6918b0662cbc2669"} Jan 27 20:15:05 crc kubenswrapper[4858]: I0127 20:15:05.421318 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfrzg" event={"ID":"b7c34e1e-7f03-4b35-8a78-38432c88a885","Type":"ContainerStarted","Data":"d6d4b5586bf111020071f22c22d40855c8bf66a9fa6a17e8794d59cb4417490c"} Jan 27 20:15:05 crc kubenswrapper[4858]: I0127 20:15:05.424832 4858 generic.go:334] "Generic (PLEG): container finished" podID="43f8439c-4c71-4ed1-b4db-462915af0785" containerID="cf431c23646fdda4bbdfd22ee6a6449db9174b087bd5747e67ed516d8c7cd23c" exitCode=0 Jan 27 20:15:05 crc kubenswrapper[4858]: I0127 20:15:05.424919 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xgbcp" event={"ID":"43f8439c-4c71-4ed1-b4db-462915af0785","Type":"ContainerDied","Data":"cf431c23646fdda4bbdfd22ee6a6449db9174b087bd5747e67ed516d8c7cd23c"} Jan 27 20:15:05 crc kubenswrapper[4858]: I0127 20:15:05.424948 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xgbcp" event={"ID":"43f8439c-4c71-4ed1-b4db-462915af0785","Type":"ContainerStarted","Data":"148730f815c05c7831502d7d21c518a614b4196ccbae9006c00c25061ff4de94"} Jan 27 20:15:05 crc kubenswrapper[4858]: I0127 20:15:05.427940 4858 generic.go:334] "Generic (PLEG): container finished" podID="9ce09890-416b-4b69-87f8-5a695f3c2ce8" containerID="ac41c51f7eb2afdf600c30cdcc00b2a2f5e47988dbfa489b524547341607806c" exitCode=0 Jan 27 20:15:05 crc kubenswrapper[4858]: I0127 20:15:05.428032 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrqvn" event={"ID":"9ce09890-416b-4b69-87f8-5a695f3c2ce8","Type":"ContainerDied","Data":"ac41c51f7eb2afdf600c30cdcc00b2a2f5e47988dbfa489b524547341607806c"} Jan 27 20:15:05 crc kubenswrapper[4858]: I0127 20:15:05.430595 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6md42" 
event={"ID":"e59c3191-f721-45cc-b1d2-2e6bd8bc0797","Type":"ContainerStarted","Data":"deae84786169f6c07c65743393116169e78e106c7db713aa6cd8c2e1cd550117"} Jan 27 20:15:05 crc kubenswrapper[4858]: I0127 20:15:05.505994 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6md42" podStartSLOduration=2.988789234 podStartE2EDuration="4.50596537s" podCreationTimestamp="2026-01-27 20:15:01 +0000 UTC" firstStartedPulling="2026-01-27 20:15:03.409676096 +0000 UTC m=+448.117491812" lastFinishedPulling="2026-01-27 20:15:04.926852242 +0000 UTC m=+449.634667948" observedRunningTime="2026-01-27 20:15:05.502248169 +0000 UTC m=+450.210063875" watchObservedRunningTime="2026-01-27 20:15:05.50596537 +0000 UTC m=+450.213781076" Jan 27 20:15:05 crc kubenswrapper[4858]: I0127 20:15:05.924132 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xt9cz" Jan 27 20:15:05 crc kubenswrapper[4858]: I0127 20:15:05.978027 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8tr47"] Jan 27 20:15:06 crc kubenswrapper[4858]: I0127 20:15:06.441811 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrqvn" event={"ID":"9ce09890-416b-4b69-87f8-5a695f3c2ce8","Type":"ContainerStarted","Data":"e289bd719d90e0709a58b8df53e7b69d10b4e12a91d449ee80c419a3b5a63aa1"} Jan 27 20:15:06 crc kubenswrapper[4858]: I0127 20:15:06.466290 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jrqvn" podStartSLOduration=1.815166406 podStartE2EDuration="4.466274421s" podCreationTimestamp="2026-01-27 20:15:02 +0000 UTC" firstStartedPulling="2026-01-27 20:15:03.404906624 +0000 UTC m=+448.112722330" lastFinishedPulling="2026-01-27 20:15:06.056014639 +0000 UTC m=+450.763830345" observedRunningTime="2026-01-27 20:15:06.46489056 +0000 UTC m=+451.172706286" watchObservedRunningTime="2026-01-27 20:15:06.466274421 +0000 UTC m=+451.174090127" Jan 27 20:15:07 crc kubenswrapper[4858]: I0127 20:15:07.458330 4858 generic.go:334] "Generic (PLEG): container finished" podID="b7c34e1e-7f03-4b35-8a78-38432c88a885" containerID="6fa7497bb06f756c71152ed3aecd962acaf777af239424b8f7b6ba724c78aeae" exitCode=0 Jan 27 20:15:07 crc kubenswrapper[4858]: I0127 20:15:07.458645 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfrzg" event={"ID":"b7c34e1e-7f03-4b35-8a78-38432c88a885","Type":"ContainerDied","Data":"6fa7497bb06f756c71152ed3aecd962acaf777af239424b8f7b6ba724c78aeae"} Jan 27 20:15:08 crc kubenswrapper[4858]: I0127 20:15:08.467489 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfrzg" event={"ID":"b7c34e1e-7f03-4b35-8a78-38432c88a885","Type":"ContainerStarted","Data":"bc3393dc3a685ad5a04f1e6609b4c877e22dceb07a4a414ddcc36e1567907a2e"} Jan 27 20:15:08 crc kubenswrapper[4858]: I0127 20:15:08.489817 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jfrzg" podStartSLOduration=2.037636661 podStartE2EDuration="4.489793911s" podCreationTimestamp="2026-01-27 20:15:04 +0000 UTC" firstStartedPulling="2026-01-27 20:15:05.422067853 +0000 UTC m=+450.129883559" lastFinishedPulling="2026-01-27 20:15:07.874225103 +0000 UTC m=+452.582040809" observedRunningTime="2026-01-27 20:15:08.484929645 +0000 UTC m=+453.192745391" 
watchObservedRunningTime="2026-01-27 20:15:08.489793911 +0000 UTC m=+453.197609617" Jan 27 20:15:10 crc kubenswrapper[4858]: I0127 20:15:10.479871 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xgbcp" event={"ID":"43f8439c-4c71-4ed1-b4db-462915af0785","Type":"ContainerStarted","Data":"5451091426a5251f55dbbefd3bdc0afb5bbf5349b145283afbf2813e382ebf5a"} Jan 27 20:15:11 crc kubenswrapper[4858]: I0127 20:15:11.486762 4858 generic.go:334] "Generic (PLEG): container finished" podID="43f8439c-4c71-4ed1-b4db-462915af0785" containerID="5451091426a5251f55dbbefd3bdc0afb5bbf5349b145283afbf2813e382ebf5a" exitCode=0 Jan 27 20:15:11 crc kubenswrapper[4858]: I0127 20:15:11.487462 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xgbcp" event={"ID":"43f8439c-4c71-4ed1-b4db-462915af0785","Type":"ContainerDied","Data":"5451091426a5251f55dbbefd3bdc0afb5bbf5349b145283afbf2813e382ebf5a"} Jan 27 20:15:12 crc kubenswrapper[4858]: I0127 20:15:12.158250 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6md42" Jan 27 20:15:12 crc kubenswrapper[4858]: I0127 20:15:12.158512 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6md42" Jan 27 20:15:12 crc kubenswrapper[4858]: I0127 20:15:12.204530 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6md42" Jan 27 20:15:12 crc kubenswrapper[4858]: I0127 20:15:12.357773 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jrqvn" Jan 27 20:15:12 crc kubenswrapper[4858]: I0127 20:15:12.358675 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jrqvn" Jan 27 20:15:12 crc kubenswrapper[4858]: I0127 20:15:12.405527 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jrqvn" Jan 27 20:15:12 crc kubenswrapper[4858]: I0127 20:15:12.495812 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xgbcp" event={"ID":"43f8439c-4c71-4ed1-b4db-462915af0785","Type":"ContainerStarted","Data":"a6acbf48c8e4256723c27055d81c5e6ec8b396777d1b2ee5b4c54cacb758a8b9"} Jan 27 20:15:12 crc kubenswrapper[4858]: I0127 20:15:12.523049 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xgbcp" podStartSLOduration=2.059342054 podStartE2EDuration="8.523028099s" podCreationTimestamp="2026-01-27 20:15:04 +0000 UTC" firstStartedPulling="2026-01-27 20:15:05.42599058 +0000 UTC m=+450.133806286" lastFinishedPulling="2026-01-27 20:15:11.889676625 +0000 UTC m=+456.597492331" observedRunningTime="2026-01-27 20:15:12.52171599 +0000 UTC m=+457.229531706" watchObservedRunningTime="2026-01-27 20:15:12.523028099 +0000 UTC m=+457.230843805" Jan 27 20:15:12 crc kubenswrapper[4858]: I0127 20:15:12.538394 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jrqvn" Jan 27 20:15:12 crc kubenswrapper[4858]: I0127 20:15:12.540959 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6md42" Jan 27 20:15:14 crc kubenswrapper[4858]: I0127 20:15:14.704334 4858 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jfrzg" Jan 27 20:15:14 crc kubenswrapper[4858]: I0127 20:15:14.704399 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jfrzg" Jan 27 20:15:14 crc kubenswrapper[4858]: I0127 20:15:14.745333 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jfrzg" Jan 27 20:15:14 crc kubenswrapper[4858]: I0127 20:15:14.790927 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xgbcp" Jan 27 20:15:14 crc kubenswrapper[4858]: I0127 20:15:14.790998 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xgbcp" Jan 27 20:15:14 crc kubenswrapper[4858]: I0127 20:15:14.836587 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xgbcp" Jan 27 20:15:15 crc kubenswrapper[4858]: I0127 20:15:15.549772 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jfrzg" Jan 27 20:15:24 crc kubenswrapper[4858]: I0127 20:15:24.834142 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xgbcp" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.016184 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" podUID="631986f5-1f28-45ac-8390-c3ac0f3920c0" containerName="registry" containerID="cri-o://5170e3b3de596bfe04acc220aa30ea37732c2ab06a93b2eead4fd47108a5cf03" gracePeriod=30 Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.374017 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.448226 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbjvd\" (UniqueName: \"kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-kube-api-access-qbjvd\") pod \"631986f5-1f28-45ac-8390-c3ac0f3920c0\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.448350 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/631986f5-1f28-45ac-8390-c3ac0f3920c0-installation-pull-secrets\") pod \"631986f5-1f28-45ac-8390-c3ac0f3920c0\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.448421 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/631986f5-1f28-45ac-8390-c3ac0f3920c0-ca-trust-extracted\") pod \"631986f5-1f28-45ac-8390-c3ac0f3920c0\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.448464 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-bound-sa-token\") pod \"631986f5-1f28-45ac-8390-c3ac0f3920c0\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.448489 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/631986f5-1f28-45ac-8390-c3ac0f3920c0-trusted-ca\") pod \"631986f5-1f28-45ac-8390-c3ac0f3920c0\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.448512 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-registry-tls\") pod \"631986f5-1f28-45ac-8390-c3ac0f3920c0\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.448724 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"631986f5-1f28-45ac-8390-c3ac0f3920c0\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.448783 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/631986f5-1f28-45ac-8390-c3ac0f3920c0-registry-certificates\") pod \"631986f5-1f28-45ac-8390-c3ac0f3920c0\" (UID: \"631986f5-1f28-45ac-8390-c3ac0f3920c0\") " Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.449466 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/631986f5-1f28-45ac-8390-c3ac0f3920c0-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "631986f5-1f28-45ac-8390-c3ac0f3920c0" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.449593 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/631986f5-1f28-45ac-8390-c3ac0f3920c0-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "631986f5-1f28-45ac-8390-c3ac0f3920c0" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.453944 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/631986f5-1f28-45ac-8390-c3ac0f3920c0-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "631986f5-1f28-45ac-8390-c3ac0f3920c0" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.454842 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-kube-api-access-qbjvd" (OuterVolumeSpecName: "kube-api-access-qbjvd") pod "631986f5-1f28-45ac-8390-c3ac0f3920c0" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0"). InnerVolumeSpecName "kube-api-access-qbjvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.455529 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "631986f5-1f28-45ac-8390-c3ac0f3920c0" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.456251 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "631986f5-1f28-45ac-8390-c3ac0f3920c0" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.462091 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "631986f5-1f28-45ac-8390-c3ac0f3920c0" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.467672 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/631986f5-1f28-45ac-8390-c3ac0f3920c0-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "631986f5-1f28-45ac-8390-c3ac0f3920c0" (UID: "631986f5-1f28-45ac-8390-c3ac0f3920c0"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.550352 4858 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/631986f5-1f28-45ac-8390-c3ac0f3920c0-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.550397 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbjvd\" (UniqueName: \"kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-kube-api-access-qbjvd\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.550411 4858 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/631986f5-1f28-45ac-8390-c3ac0f3920c0-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.550423 4858 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/631986f5-1f28-45ac-8390-c3ac0f3920c0-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.550434 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.550445 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/631986f5-1f28-45ac-8390-c3ac0f3920c0-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.550455 4858 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/631986f5-1f28-45ac-8390-c3ac0f3920c0-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.592756 4858 generic.go:334] "Generic (PLEG): container finished" podID="631986f5-1f28-45ac-8390-c3ac0f3920c0" containerID="5170e3b3de596bfe04acc220aa30ea37732c2ab06a93b2eead4fd47108a5cf03" exitCode=0 Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.592800 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" event={"ID":"631986f5-1f28-45ac-8390-c3ac0f3920c0","Type":"ContainerDied","Data":"5170e3b3de596bfe04acc220aa30ea37732c2ab06a93b2eead4fd47108a5cf03"} Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.592828 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" event={"ID":"631986f5-1f28-45ac-8390-c3ac0f3920c0","Type":"ContainerDied","Data":"bbe467fdf212450910655a0703c1612d811ee7bfe5deff20b474de7d4e6ae440"} Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.592846 4858 scope.go:117] "RemoveContainer" containerID="5170e3b3de596bfe04acc220aa30ea37732c2ab06a93b2eead4fd47108a5cf03" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.592843 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8tr47" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.613294 4858 scope.go:117] "RemoveContainer" containerID="5170e3b3de596bfe04acc220aa30ea37732c2ab06a93b2eead4fd47108a5cf03" Jan 27 20:15:31 crc kubenswrapper[4858]: E0127 20:15:31.616704 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5170e3b3de596bfe04acc220aa30ea37732c2ab06a93b2eead4fd47108a5cf03\": container with ID starting with 5170e3b3de596bfe04acc220aa30ea37732c2ab06a93b2eead4fd47108a5cf03 not found: ID does not exist" containerID="5170e3b3de596bfe04acc220aa30ea37732c2ab06a93b2eead4fd47108a5cf03" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.616750 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5170e3b3de596bfe04acc220aa30ea37732c2ab06a93b2eead4fd47108a5cf03"} err="failed to get container status \"5170e3b3de596bfe04acc220aa30ea37732c2ab06a93b2eead4fd47108a5cf03\": rpc error: code = NotFound desc = could not find container \"5170e3b3de596bfe04acc220aa30ea37732c2ab06a93b2eead4fd47108a5cf03\": container with ID starting with 5170e3b3de596bfe04acc220aa30ea37732c2ab06a93b2eead4fd47108a5cf03 not found: ID does not exist" Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.623710 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8tr47"] Jan 27 20:15:31 crc kubenswrapper[4858]: I0127 20:15:31.628108 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8tr47"] Jan 27 20:15:32 crc kubenswrapper[4858]: I0127 20:15:32.079142 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="631986f5-1f28-45ac-8390-c3ac0f3920c0" path="/var/lib/kubelet/pods/631986f5-1f28-45ac-8390-c3ac0f3920c0/volumes" Jan 27 20:16:29 crc kubenswrapper[4858]: I0127 20:16:29.328860 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:16:29 crc kubenswrapper[4858]: I0127 20:16:29.329651 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:16:59 crc kubenswrapper[4858]: I0127 20:16:59.329302 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:16:59 crc kubenswrapper[4858]: I0127 20:16:59.330444 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:17:29 crc kubenswrapper[4858]: I0127 20:17:29.328945 4858 patch_prober.go:28] interesting 
pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:17:29 crc kubenswrapper[4858]: I0127 20:17:29.329712 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:17:29 crc kubenswrapper[4858]: I0127 20:17:29.329786 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:17:29 crc kubenswrapper[4858]: I0127 20:17:29.330747 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"97a523bb07aac1f1bb0ad85b2296648a7841f219815ab2eac986bfc2fc387de8"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 20:17:29 crc kubenswrapper[4858]: I0127 20:17:29.330847 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://97a523bb07aac1f1bb0ad85b2296648a7841f219815ab2eac986bfc2fc387de8" gracePeriod=600 Jan 27 20:17:30 crc kubenswrapper[4858]: I0127 20:17:30.296216 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="97a523bb07aac1f1bb0ad85b2296648a7841f219815ab2eac986bfc2fc387de8" exitCode=0 Jan 27 20:17:30 crc kubenswrapper[4858]: I0127 20:17:30.296328 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"97a523bb07aac1f1bb0ad85b2296648a7841f219815ab2eac986bfc2fc387de8"} Jan 27 20:17:30 crc kubenswrapper[4858]: I0127 20:17:30.297032 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"86373b100213bc4355e865928b2a5437a78a9df277502a47946cf0f5767d7dde"} Jan 27 20:17:30 crc kubenswrapper[4858]: I0127 20:17:30.297065 4858 scope.go:117] "RemoveContainer" containerID="f523d2a034fb7aa3deeabfd7fe2846140bad94ae6e8919a72e4a06a8629bcf50" Jan 27 20:19:29 crc kubenswrapper[4858]: I0127 20:19:29.328495 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:19:29 crc kubenswrapper[4858]: I0127 20:19:29.329760 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:19:59 
crc kubenswrapper[4858]: I0127 20:19:59.328923 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:19:59 crc kubenswrapper[4858]: I0127 20:19:59.329590 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:20:29 crc kubenswrapper[4858]: I0127 20:20:29.329986 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:20:29 crc kubenswrapper[4858]: I0127 20:20:29.331108 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:20:29 crc kubenswrapper[4858]: I0127 20:20:29.331230 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:20:29 crc kubenswrapper[4858]: I0127 20:20:29.332722 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"86373b100213bc4355e865928b2a5437a78a9df277502a47946cf0f5767d7dde"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 20:20:29 crc kubenswrapper[4858]: I0127 20:20:29.332852 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://86373b100213bc4355e865928b2a5437a78a9df277502a47946cf0f5767d7dde" gracePeriod=600 Jan 27 20:20:30 crc kubenswrapper[4858]: I0127 20:20:30.475053 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="86373b100213bc4355e865928b2a5437a78a9df277502a47946cf0f5767d7dde" exitCode=0 Jan 27 20:20:30 crc kubenswrapper[4858]: I0127 20:20:30.475117 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"86373b100213bc4355e865928b2a5437a78a9df277502a47946cf0f5767d7dde"} Jan 27 20:20:30 crc kubenswrapper[4858]: I0127 20:20:30.475404 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"134de6cefdf9618660f3288534217e176eacedd779a7557a8425c203f6c864ec"} Jan 27 20:20:30 crc kubenswrapper[4858]: I0127 20:20:30.475440 4858 scope.go:117] "RemoveContainer" 
containerID="97a523bb07aac1f1bb0ad85b2296648a7841f219815ab2eac986bfc2fc387de8" Jan 27 20:20:32 crc kubenswrapper[4858]: I0127 20:20:32.079525 4858 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.258070 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-x9cph"] Jan 27 20:20:39 crc kubenswrapper[4858]: E0127 20:20:39.258988 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="631986f5-1f28-45ac-8390-c3ac0f3920c0" containerName="registry" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.259007 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="631986f5-1f28-45ac-8390-c3ac0f3920c0" containerName="registry" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.259165 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="631986f5-1f28-45ac-8390-c3ac0f3920c0" containerName="registry" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.259653 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-x9cph" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.263242 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.263606 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.267740 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2bpx\" (UniqueName: \"kubernetes.io/projected/92b94f6b-96ed-4ee3-96e6-8d1c22358773-kube-api-access-z2bpx\") pod \"cert-manager-cainjector-cf98fcc89-x9cph\" (UID: \"92b94f6b-96ed-4ee3-96e6-8d1c22358773\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-x9cph" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.268274 4858 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-xkph9" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.272627 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-n8kqf"] Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.275826 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-n8kqf" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.278680 4858 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-9kf8r" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.291885 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-x9cph"] Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.300666 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-n8kqf"] Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.305895 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-86ftq"] Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.329836 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-86ftq" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.332624 4858 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-g8njp" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.349571 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-86ftq"] Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.369092 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2bpx\" (UniqueName: \"kubernetes.io/projected/92b94f6b-96ed-4ee3-96e6-8d1c22358773-kube-api-access-z2bpx\") pod \"cert-manager-cainjector-cf98fcc89-x9cph\" (UID: \"92b94f6b-96ed-4ee3-96e6-8d1c22358773\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-x9cph" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.369147 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v426c\" (UniqueName: \"kubernetes.io/projected/7d2f237c-d08c-479b-a3e3-7ef983dc2c41-kube-api-access-v426c\") pod \"cert-manager-webhook-687f57d79b-86ftq\" (UID: \"7d2f237c-d08c-479b-a3e3-7ef983dc2c41\") " pod="cert-manager/cert-manager-webhook-687f57d79b-86ftq" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.369171 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn2tb\" (UniqueName: \"kubernetes.io/projected/f425f50a-9405-4c04-b320-22524d815b8a-kube-api-access-sn2tb\") pod \"cert-manager-858654f9db-n8kqf\" (UID: \"f425f50a-9405-4c04-b320-22524d815b8a\") " pod="cert-manager/cert-manager-858654f9db-n8kqf" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.400685 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2bpx\" (UniqueName: \"kubernetes.io/projected/92b94f6b-96ed-4ee3-96e6-8d1c22358773-kube-api-access-z2bpx\") pod \"cert-manager-cainjector-cf98fcc89-x9cph\" (UID: \"92b94f6b-96ed-4ee3-96e6-8d1c22358773\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-x9cph" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.470497 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v426c\" (UniqueName: \"kubernetes.io/projected/7d2f237c-d08c-479b-a3e3-7ef983dc2c41-kube-api-access-v426c\") pod \"cert-manager-webhook-687f57d79b-86ftq\" (UID: \"7d2f237c-d08c-479b-a3e3-7ef983dc2c41\") " pod="cert-manager/cert-manager-webhook-687f57d79b-86ftq" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.470578 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn2tb\" (UniqueName: \"kubernetes.io/projected/f425f50a-9405-4c04-b320-22524d815b8a-kube-api-access-sn2tb\") pod \"cert-manager-858654f9db-n8kqf\" (UID: \"f425f50a-9405-4c04-b320-22524d815b8a\") " pod="cert-manager/cert-manager-858654f9db-n8kqf" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.491180 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v426c\" (UniqueName: \"kubernetes.io/projected/7d2f237c-d08c-479b-a3e3-7ef983dc2c41-kube-api-access-v426c\") pod \"cert-manager-webhook-687f57d79b-86ftq\" (UID: \"7d2f237c-d08c-479b-a3e3-7ef983dc2c41\") " pod="cert-manager/cert-manager-webhook-687f57d79b-86ftq" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.491477 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sn2tb\" (UniqueName: \"kubernetes.io/projected/f425f50a-9405-4c04-b320-22524d815b8a-kube-api-access-sn2tb\") pod \"cert-manager-858654f9db-n8kqf\" (UID: \"f425f50a-9405-4c04-b320-22524d815b8a\") " pod="cert-manager/cert-manager-858654f9db-n8kqf" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.583813 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-x9cph" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.595219 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-n8kqf" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.650354 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-86ftq" Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.825233 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-n8kqf"] Jan 27 20:20:39 crc kubenswrapper[4858]: W0127 20:20:39.832905 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf425f50a_9405_4c04_b320_22524d815b8a.slice/crio-278af29cf9538a9df92eb6fbd3cb2697bcf6ff907e0a585a4b32069ba48c2b0a WatchSource:0}: Error finding container 278af29cf9538a9df92eb6fbd3cb2697bcf6ff907e0a585a4b32069ba48c2b0a: Status 404 returned error can't find the container with id 278af29cf9538a9df92eb6fbd3cb2697bcf6ff907e0a585a4b32069ba48c2b0a Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.837562 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 20:20:39 crc kubenswrapper[4858]: I0127 20:20:39.919382 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-86ftq"] Jan 27 20:20:40 crc kubenswrapper[4858]: W0127 20:20:40.076402 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92b94f6b_96ed_4ee3_96e6_8d1c22358773.slice/crio-bae521bdea0c2cd45a229db49ee8d97a4a682c3109e557ffa14f18ea37354ff9 WatchSource:0}: Error finding container bae521bdea0c2cd45a229db49ee8d97a4a682c3109e557ffa14f18ea37354ff9: Status 404 returned error can't find the container with id bae521bdea0c2cd45a229db49ee8d97a4a682c3109e557ffa14f18ea37354ff9 Jan 27 20:20:40 crc kubenswrapper[4858]: I0127 20:20:40.083213 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-x9cph"] Jan 27 20:20:40 crc kubenswrapper[4858]: I0127 20:20:40.543278 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-n8kqf" event={"ID":"f425f50a-9405-4c04-b320-22524d815b8a","Type":"ContainerStarted","Data":"278af29cf9538a9df92eb6fbd3cb2697bcf6ff907e0a585a4b32069ba48c2b0a"} Jan 27 20:20:40 crc kubenswrapper[4858]: I0127 20:20:40.545434 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-x9cph" event={"ID":"92b94f6b-96ed-4ee3-96e6-8d1c22358773","Type":"ContainerStarted","Data":"bae521bdea0c2cd45a229db49ee8d97a4a682c3109e557ffa14f18ea37354ff9"} Jan 27 20:20:40 crc kubenswrapper[4858]: I0127 20:20:40.547455 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-86ftq" 
event={"ID":"7d2f237c-d08c-479b-a3e3-7ef983dc2c41","Type":"ContainerStarted","Data":"ba521eb4b427b8bb1cfce28cdcf3a161a8f6cdae8eaa121a22a3931bd35d4f77"} Jan 27 20:20:48 crc kubenswrapper[4858]: I0127 20:20:48.606322 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-n8kqf" event={"ID":"f425f50a-9405-4c04-b320-22524d815b8a","Type":"ContainerStarted","Data":"99c37531067448fecbb25f85b1003d96b2da6774e4a983a487b1b902a0afce9c"} Jan 27 20:20:48 crc kubenswrapper[4858]: I0127 20:20:48.609336 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-x9cph" event={"ID":"92b94f6b-96ed-4ee3-96e6-8d1c22358773","Type":"ContainerStarted","Data":"7e7ab758dfb90892d1d86766b65f34c765c147626c22a490e2d333f2e8aeeabd"} Jan 27 20:20:48 crc kubenswrapper[4858]: I0127 20:20:48.623584 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-n8kqf" podStartSLOduration=1.607591746 podStartE2EDuration="9.623526531s" podCreationTimestamp="2026-01-27 20:20:39 +0000 UTC" firstStartedPulling="2026-01-27 20:20:39.83729106 +0000 UTC m=+784.545106766" lastFinishedPulling="2026-01-27 20:20:47.853225825 +0000 UTC m=+792.561041551" observedRunningTime="2026-01-27 20:20:48.622692677 +0000 UTC m=+793.330508393" watchObservedRunningTime="2026-01-27 20:20:48.623526531 +0000 UTC m=+793.331342237" Jan 27 20:20:48 crc kubenswrapper[4858]: I0127 20:20:48.642112 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-x9cph" podStartSLOduration=1.9305428629999999 podStartE2EDuration="9.64209366s" podCreationTimestamp="2026-01-27 20:20:39 +0000 UTC" firstStartedPulling="2026-01-27 20:20:40.079654134 +0000 UTC m=+784.787469860" lastFinishedPulling="2026-01-27 20:20:47.791204951 +0000 UTC m=+792.499020657" observedRunningTime="2026-01-27 20:20:48.640187354 +0000 UTC m=+793.348003080" watchObservedRunningTime="2026-01-27 20:20:48.64209366 +0000 UTC m=+793.349909356" Jan 27 20:20:48 crc kubenswrapper[4858]: I0127 20:20:48.673337 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rsk7j"] Jan 27 20:20:48 crc kubenswrapper[4858]: I0127 20:20:48.673853 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovn-controller" containerID="cri-o://3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a" gracePeriod=30 Jan 27 20:20:48 crc kubenswrapper[4858]: I0127 20:20:48.673984 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="nbdb" containerID="cri-o://efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f" gracePeriod=30 Jan 27 20:20:48 crc kubenswrapper[4858]: I0127 20:20:48.674488 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="northd" containerID="cri-o://ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9" gracePeriod=30 Jan 27 20:20:48 crc kubenswrapper[4858]: I0127 20:20:48.674537 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" 
containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5" gracePeriod=30 Jan 27 20:20:48 crc kubenswrapper[4858]: I0127 20:20:48.674605 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="kube-rbac-proxy-node" containerID="cri-o://2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976" gracePeriod=30 Jan 27 20:20:48 crc kubenswrapper[4858]: I0127 20:20:48.674657 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovn-acl-logging" containerID="cri-o://bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281" gracePeriod=30 Jan 27 20:20:48 crc kubenswrapper[4858]: I0127 20:20:48.675093 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="sbdb" containerID="cri-o://721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb" gracePeriod=30 Jan 27 20:20:48 crc kubenswrapper[4858]: I0127 20:20:48.730897 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovnkube-controller" containerID="cri-o://41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6" gracePeriod=30 Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.070641 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovnkube-controller/3.log" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.072543 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovn-acl-logging/0.log" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.073191 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovn-controller/0.log" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.073659 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127413 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-etc-openvswitch\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127460 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-log-socket\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127523 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-run-netns\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127537 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-cni-bin\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127592 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-ovn\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127621 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-run-ovn-kubernetes\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127645 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-systemd\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127671 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-var-lib-openvswitch\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127668 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127687 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127688 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-kubelet\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127742 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127788 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-env-overrides\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127815 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p24pj\" (UniqueName: \"kubernetes.io/projected/5cda3ac1-7db7-4215-a301-b757743bff59-kube-api-access-p24pj\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127833 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-openvswitch\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127865 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-node-log\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127885 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-ovnkube-config\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127901 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127926 4858 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-ovnkube-script-lib\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127945 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-cni-netd\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.127977 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-systemd-units\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.128003 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-slash\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.128029 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5cda3ac1-7db7-4215-a301-b757743bff59-ovn-node-metrics-cert\") pod \"5cda3ac1-7db7-4215-a301-b757743bff59\" (UID: \"5cda3ac1-7db7-4215-a301-b757743bff59\") " Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.128434 4858 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.128456 4858 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.128469 4858 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.128659 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.128736 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.128794 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.129170 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.129689 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-log-socket" (OuterVolumeSpecName: "log-socket") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.129725 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.129744 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.130040 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.130202 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.130236 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-node-log" (OuterVolumeSpecName: "node-log") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). 
InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.130337 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.130410 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-slash" (OuterVolumeSpecName: "host-slash") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.130619 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.130962 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.138565 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cda3ac1-7db7-4215-a301-b757743bff59-kube-api-access-p24pj" (OuterVolumeSpecName: "kube-api-access-p24pj") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "kube-api-access-p24pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.144747 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cda3ac1-7db7-4215-a301-b757743bff59-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.148059 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9c8sw"] Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.148506 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="nbdb" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.148570 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="nbdb" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.148585 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovn-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.148596 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovn-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.148612 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.148624 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.148637 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovnkube-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.148648 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovnkube-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.148658 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovnkube-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.148669 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovnkube-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.148681 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="kubecfg-setup" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.148691 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="kubecfg-setup" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.148711 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="sbdb" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.148722 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="sbdb" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.148738 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="kube-rbac-proxy-node" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.148748 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="kube-rbac-proxy-node" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.148764 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovnkube-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.148774 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovnkube-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.148785 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="northd" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.148795 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="northd" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.148807 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovn-acl-logging" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.148817 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovn-acl-logging" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.149007 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovn-acl-logging" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.149023 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="sbdb" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.149039 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="northd" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.149054 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="kube-rbac-proxy-node" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.149070 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovnkube-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.149084 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovnkube-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.149097 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovnkube-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.149110 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.149125 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovnkube-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.149139 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovn-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.149154 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="nbdb" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.149169 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovnkube-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.149347 
4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovnkube-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.149367 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovnkube-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.149387 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovnkube-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.149400 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" containerName="ovnkube-controller" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.151081 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5cda3ac1-7db7-4215-a301-b757743bff59" (UID: "5cda3ac1-7db7-4215-a301-b757743bff59"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.153497 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.229570 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fe259694-7bc4-4311-a2a9-df0dab9ad484-ovn-node-metrics-cert\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.229626 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-run-netns\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.229672 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-etc-openvswitch\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.229694 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-run-openvswitch\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.229713 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-systemd-units\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.229733 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.229755 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-kubelet\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.229773 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-run-ovn\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.229790 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fe259694-7bc4-4311-a2a9-df0dab9ad484-ovnkube-config\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.229809 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-run-ovn-kubernetes\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.229827 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-slash\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.229843 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fe259694-7bc4-4311-a2a9-df0dab9ad484-ovnkube-script-lib\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.229862 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-var-lib-openvswitch\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.229891 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-log-socket\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc 
kubenswrapper[4858]: I0127 20:20:49.229912 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fe259694-7bc4-4311-a2a9-df0dab9ad484-env-overrides\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.229935 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-run-systemd\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.229957 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-cni-bin\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.229983 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8c8m\" (UniqueName: \"kubernetes.io/projected/fe259694-7bc4-4311-a2a9-df0dab9ad484-kube-api-access-w8c8m\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230003 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-cni-netd\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230021 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-node-log\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230057 4858 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230070 4858 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230082 4858 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230093 4858 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230103 4858 reconciler_common.go:293] "Volume 
detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230113 4858 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230124 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p24pj\" (UniqueName: \"kubernetes.io/projected/5cda3ac1-7db7-4215-a301-b757743bff59-kube-api-access-p24pj\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230134 4858 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230144 4858 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230226 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230388 4858 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230432 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5cda3ac1-7db7-4215-a301-b757743bff59-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230601 4858 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230619 4858 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230653 4858 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230667 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5cda3ac1-7db7-4215-a301-b757743bff59-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.230712 4858 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5cda3ac1-7db7-4215-a301-b757743bff59-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332152 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-run-ovn-kubernetes\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332297 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-run-ovn-kubernetes\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332434 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-slash\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332392 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-slash\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332499 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fe259694-7bc4-4311-a2a9-df0dab9ad484-ovnkube-script-lib\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332531 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-var-lib-openvswitch\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332625 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-log-socket\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332668 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fe259694-7bc4-4311-a2a9-df0dab9ad484-env-overrides\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332699 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-var-lib-openvswitch\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332711 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-run-systemd\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332740 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-cni-bin\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332785 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-run-systemd\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332804 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8c8m\" (UniqueName: \"kubernetes.io/projected/fe259694-7bc4-4311-a2a9-df0dab9ad484-kube-api-access-w8c8m\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332813 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-log-socket\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332850 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-cni-netd\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332860 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-cni-bin\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332902 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-node-log\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332918 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-node-log\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332937 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fe259694-7bc4-4311-a2a9-df0dab9ad484-ovn-node-metrics-cert\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332963 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-run-netns\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.332955 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-cni-netd\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.333003 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-etc-openvswitch\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.333033 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-run-openvswitch\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.333037 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-run-netns\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.333057 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-systemd-units\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.333069 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-etc-openvswitch\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.333083 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.333096 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-run-openvswitch\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc 
kubenswrapper[4858]: I0127 20:20:49.333118 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-kubelet\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.333147 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-run-ovn\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.333123 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-systemd-units\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.333170 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fe259694-7bc4-4311-a2a9-df0dab9ad484-ovnkube-config\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.333186 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-kubelet\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.333148 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.333200 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fe259694-7bc4-4311-a2a9-df0dab9ad484-run-ovn\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.333486 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fe259694-7bc4-4311-a2a9-df0dab9ad484-ovnkube-script-lib\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.333914 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fe259694-7bc4-4311-a2a9-df0dab9ad484-ovnkube-config\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.334082 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fe259694-7bc4-4311-a2a9-df0dab9ad484-env-overrides\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.337579 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fe259694-7bc4-4311-a2a9-df0dab9ad484-ovn-node-metrics-cert\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.351646 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8c8m\" (UniqueName: \"kubernetes.io/projected/fe259694-7bc4-4311-a2a9-df0dab9ad484-kube-api-access-w8c8m\") pod \"ovnkube-node-9c8sw\" (UID: \"fe259694-7bc4-4311-a2a9-df0dab9ad484\") " pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.486319 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:49 crc kubenswrapper[4858]: W0127 20:20:49.518469 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe259694_7bc4_4311_a2a9_df0dab9ad484.slice/crio-70243f14208e58de88d1e14e17bd724aaacd602f828dac3de851832ff69b0444 WatchSource:0}: Error finding container 70243f14208e58de88d1e14e17bd724aaacd602f828dac3de851832ff69b0444: Status 404 returned error can't find the container with id 70243f14208e58de88d1e14e17bd724aaacd602f828dac3de851832ff69b0444 Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.615986 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-855m5_0fea6600-49c2-4130-a506-6046f0f7760d/kube-multus/2.log" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.617237 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-855m5_0fea6600-49c2-4130-a506-6046f0f7760d/kube-multus/1.log" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.617345 4858 generic.go:334] "Generic (PLEG): container finished" podID="0fea6600-49c2-4130-a506-6046f0f7760d" containerID="7b84079c817a81c05a19043435704e8a5fda3cbe2f61372f38f3fe837f08fdf2" exitCode=2 Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.617472 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-855m5" event={"ID":"0fea6600-49c2-4130-a506-6046f0f7760d","Type":"ContainerDied","Data":"7b84079c817a81c05a19043435704e8a5fda3cbe2f61372f38f3fe837f08fdf2"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.617615 4858 scope.go:117] "RemoveContainer" containerID="57801dd9a207d6a59bdd79e9a8c06e2d2bce4e40905aa52aaf172b2c9430703f" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.618306 4858 scope.go:117] "RemoveContainer" containerID="7b84079c817a81c05a19043435704e8a5fda3cbe2f61372f38f3fe837f08fdf2" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.621647 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovnkube-controller/3.log" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.625219 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovn-acl-logging/0.log" Jan 
27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.625760 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rsk7j_5cda3ac1-7db7-4215-a301-b757743bff59/ovn-controller/0.log" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626278 4858 generic.go:334] "Generic (PLEG): container finished" podID="5cda3ac1-7db7-4215-a301-b757743bff59" containerID="41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6" exitCode=0 Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626311 4858 generic.go:334] "Generic (PLEG): container finished" podID="5cda3ac1-7db7-4215-a301-b757743bff59" containerID="721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb" exitCode=0 Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626321 4858 generic.go:334] "Generic (PLEG): container finished" podID="5cda3ac1-7db7-4215-a301-b757743bff59" containerID="efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f" exitCode=0 Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626330 4858 generic.go:334] "Generic (PLEG): container finished" podID="5cda3ac1-7db7-4215-a301-b757743bff59" containerID="ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9" exitCode=0 Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626338 4858 generic.go:334] "Generic (PLEG): container finished" podID="5cda3ac1-7db7-4215-a301-b757743bff59" containerID="4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5" exitCode=0 Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626346 4858 generic.go:334] "Generic (PLEG): container finished" podID="5cda3ac1-7db7-4215-a301-b757743bff59" containerID="2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976" exitCode=0 Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626354 4858 generic.go:334] "Generic (PLEG): container finished" podID="5cda3ac1-7db7-4215-a301-b757743bff59" containerID="bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281" exitCode=143 Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626362 4858 generic.go:334] "Generic (PLEG): container finished" podID="5cda3ac1-7db7-4215-a301-b757743bff59" containerID="3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a" exitCode=143 Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626445 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerDied","Data":"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626485 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerDied","Data":"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626500 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerDied","Data":"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626513 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerDied","Data":"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9"} Jan 27 
20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626534 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerDied","Data":"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626557 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerDied","Data":"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626577 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626591 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626599 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626606 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626614 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626620 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626627 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626633 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626640 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626649 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626659 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerDied","Data":"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626668 4858 pod_container_deletor.go:114] "Failed to issue the 
request to remove container" containerID={"Type":"cri-o","ID":"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626672 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626677 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626931 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626940 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626947 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626953 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626959 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626967 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626973 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626979 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.626990 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerDied","Data":"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627002 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627011 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627017 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627023 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627029 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627034 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627039 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627045 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627051 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627056 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627065 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rsk7j" event={"ID":"5cda3ac1-7db7-4215-a301-b757743bff59","Type":"ContainerDied","Data":"07749768e0b8ef654ec84d90c12f1cc30c42b84087e485d48d2ba9bab3abf3a2"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627076 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627083 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627090 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627096 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627102 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627108 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627121 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627127 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627133 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.627139 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.628169 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-86ftq" event={"ID":"7d2f237c-d08c-479b-a3e3-7ef983dc2c41","Type":"ContainerStarted","Data":"a362665ce30f29f5496a673978fe69a21e9d486a9795a0376c718bb2c2f3a0ff"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.628762 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-86ftq" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.630370 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" event={"ID":"fe259694-7bc4-4311-a2a9-df0dab9ad484","Type":"ContainerStarted","Data":"70243f14208e58de88d1e14e17bd724aaacd602f828dac3de851832ff69b0444"} Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.657780 4858 scope.go:117] "RemoveContainer" containerID="41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.662805 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-86ftq" podStartSLOduration=1.525226062 podStartE2EDuration="10.662591261s" podCreationTimestamp="2026-01-27 20:20:39 +0000 UTC" firstStartedPulling="2026-01-27 20:20:39.926090805 +0000 UTC m=+784.633906511" lastFinishedPulling="2026-01-27 20:20:49.063455994 +0000 UTC m=+793.771271710" observedRunningTime="2026-01-27 20:20:49.658295864 +0000 UTC m=+794.366111580" watchObservedRunningTime="2026-01-27 20:20:49.662591261 +0000 UTC m=+794.370406967" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.700624 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rsk7j"] Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.707163 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rsk7j"] Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.758233 4858 scope.go:117] "RemoveContainer" containerID="a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.777235 4858 scope.go:117] "RemoveContainer" containerID="721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.791030 4858 scope.go:117] "RemoveContainer" 
containerID="efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.804670 4858 scope.go:117] "RemoveContainer" containerID="ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.824141 4858 scope.go:117] "RemoveContainer" containerID="4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.849308 4858 scope.go:117] "RemoveContainer" containerID="2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.870683 4858 scope.go:117] "RemoveContainer" containerID="bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.886043 4858 scope.go:117] "RemoveContainer" containerID="3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.907373 4858 scope.go:117] "RemoveContainer" containerID="d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.933403 4858 scope.go:117] "RemoveContainer" containerID="41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.934112 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6\": container with ID starting with 41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6 not found: ID does not exist" containerID="41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.934151 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6"} err="failed to get container status \"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6\": rpc error: code = NotFound desc = could not find container \"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6\": container with ID starting with 41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.934201 4858 scope.go:117] "RemoveContainer" containerID="a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.934958 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879\": container with ID starting with a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879 not found: ID does not exist" containerID="a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.934984 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879"} err="failed to get container status \"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879\": rpc error: code = NotFound desc = could not find container \"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879\": container with ID starting with 
a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.935001 4858 scope.go:117] "RemoveContainer" containerID="721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.935525 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\": container with ID starting with 721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb not found: ID does not exist" containerID="721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.935617 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb"} err="failed to get container status \"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\": rpc error: code = NotFound desc = could not find container \"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\": container with ID starting with 721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.935641 4858 scope.go:117] "RemoveContainer" containerID="efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.937784 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\": container with ID starting with efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f not found: ID does not exist" containerID="efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.937808 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f"} err="failed to get container status \"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\": rpc error: code = NotFound desc = could not find container \"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\": container with ID starting with efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.937823 4858 scope.go:117] "RemoveContainer" containerID="ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.938146 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\": container with ID starting with ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9 not found: ID does not exist" containerID="ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.938165 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9"} err="failed to get container status \"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\": rpc 
error: code = NotFound desc = could not find container \"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\": container with ID starting with ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.938180 4858 scope.go:117] "RemoveContainer" containerID="4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.938535 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\": container with ID starting with 4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5 not found: ID does not exist" containerID="4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.938599 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5"} err="failed to get container status \"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\": rpc error: code = NotFound desc = could not find container \"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\": container with ID starting with 4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.938616 4858 scope.go:117] "RemoveContainer" containerID="2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.939311 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\": container with ID starting with 2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976 not found: ID does not exist" containerID="2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.939335 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976"} err="failed to get container status \"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\": rpc error: code = NotFound desc = could not find container \"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\": container with ID starting with 2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.939352 4858 scope.go:117] "RemoveContainer" containerID="bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.939856 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\": container with ID starting with bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281 not found: ID does not exist" containerID="bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.939907 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281"} err="failed to get container status \"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\": rpc error: code = NotFound desc = could not find container \"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\": container with ID starting with bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.939947 4858 scope.go:117] "RemoveContainer" containerID="3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.940478 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\": container with ID starting with 3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a not found: ID does not exist" containerID="3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.940512 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a"} err="failed to get container status \"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\": rpc error: code = NotFound desc = could not find container \"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\": container with ID starting with 3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.940531 4858 scope.go:117] "RemoveContainer" containerID="d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6" Jan 27 20:20:49 crc kubenswrapper[4858]: E0127 20:20:49.940829 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\": container with ID starting with d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6 not found: ID does not exist" containerID="d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.940858 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6"} err="failed to get container status \"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\": rpc error: code = NotFound desc = could not find container \"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\": container with ID starting with d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.940877 4858 scope.go:117] "RemoveContainer" containerID="41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.941215 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6"} err="failed to get container status \"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6\": rpc error: code = NotFound desc = could not find container 
\"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6\": container with ID starting with 41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.941238 4858 scope.go:117] "RemoveContainer" containerID="a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.941587 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879"} err="failed to get container status \"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879\": rpc error: code = NotFound desc = could not find container \"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879\": container with ID starting with a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.941616 4858 scope.go:117] "RemoveContainer" containerID="721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.942037 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb"} err="failed to get container status \"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\": rpc error: code = NotFound desc = could not find container \"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\": container with ID starting with 721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.942059 4858 scope.go:117] "RemoveContainer" containerID="efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.942475 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f"} err="failed to get container status \"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\": rpc error: code = NotFound desc = could not find container \"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\": container with ID starting with efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.942515 4858 scope.go:117] "RemoveContainer" containerID="ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.943155 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9"} err="failed to get container status \"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\": rpc error: code = NotFound desc = could not find container \"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\": container with ID starting with ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.943181 4858 scope.go:117] "RemoveContainer" containerID="4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.943514 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5"} err="failed to get container status \"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\": rpc error: code = NotFound desc = could not find container \"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\": container with ID starting with 4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.943597 4858 scope.go:117] "RemoveContainer" containerID="2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.943901 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976"} err="failed to get container status \"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\": rpc error: code = NotFound desc = could not find container \"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\": container with ID starting with 2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.943920 4858 scope.go:117] "RemoveContainer" containerID="bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.944364 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281"} err="failed to get container status \"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\": rpc error: code = NotFound desc = could not find container \"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\": container with ID starting with bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.944389 4858 scope.go:117] "RemoveContainer" containerID="3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.944681 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a"} err="failed to get container status \"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\": rpc error: code = NotFound desc = could not find container \"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\": container with ID starting with 3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.944698 4858 scope.go:117] "RemoveContainer" containerID="d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.944896 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6"} err="failed to get container status \"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\": rpc error: code = NotFound desc = could not find container \"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\": container with ID starting with 
d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.944937 4858 scope.go:117] "RemoveContainer" containerID="41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.945129 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6"} err="failed to get container status \"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6\": rpc error: code = NotFound desc = could not find container \"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6\": container with ID starting with 41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.945146 4858 scope.go:117] "RemoveContainer" containerID="a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.945287 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879"} err="failed to get container status \"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879\": rpc error: code = NotFound desc = could not find container \"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879\": container with ID starting with a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.945308 4858 scope.go:117] "RemoveContainer" containerID="721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.945465 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb"} err="failed to get container status \"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\": rpc error: code = NotFound desc = could not find container \"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\": container with ID starting with 721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.945507 4858 scope.go:117] "RemoveContainer" containerID="efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.945749 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f"} err="failed to get container status \"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\": rpc error: code = NotFound desc = could not find container \"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\": container with ID starting with efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.945783 4858 scope.go:117] "RemoveContainer" containerID="ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.946236 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9"} err="failed to get container status \"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\": rpc error: code = NotFound desc = could not find container \"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\": container with ID starting with ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.946264 4858 scope.go:117] "RemoveContainer" containerID="4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.946489 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5"} err="failed to get container status \"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\": rpc error: code = NotFound desc = could not find container \"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\": container with ID starting with 4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.946529 4858 scope.go:117] "RemoveContainer" containerID="2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.946749 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976"} err="failed to get container status \"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\": rpc error: code = NotFound desc = could not find container \"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\": container with ID starting with 2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.946768 4858 scope.go:117] "RemoveContainer" containerID="bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.946957 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281"} err="failed to get container status \"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\": rpc error: code = NotFound desc = could not find container \"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\": container with ID starting with bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.946984 4858 scope.go:117] "RemoveContainer" containerID="3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.947171 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a"} err="failed to get container status \"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\": rpc error: code = NotFound desc = could not find container \"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\": container with ID starting with 3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a not found: ID does not exist" Jan 
27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.947196 4858 scope.go:117] "RemoveContainer" containerID="d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.947409 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6"} err="failed to get container status \"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\": rpc error: code = NotFound desc = could not find container \"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\": container with ID starting with d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.947429 4858 scope.go:117] "RemoveContainer" containerID="41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.947673 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6"} err="failed to get container status \"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6\": rpc error: code = NotFound desc = could not find container \"41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6\": container with ID starting with 41f829a65cf7885cdce86c69c561387ebe7a252e11e1ca94f06c683114e211f6 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.947706 4858 scope.go:117] "RemoveContainer" containerID="a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.947936 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879"} err="failed to get container status \"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879\": rpc error: code = NotFound desc = could not find container \"a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879\": container with ID starting with a6c66b9250a29b1445b9deb767877da7cd109a4c038f9ea6ef86cdbdd8269879 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.947959 4858 scope.go:117] "RemoveContainer" containerID="721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.948454 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb"} err="failed to get container status \"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\": rpc error: code = NotFound desc = could not find container \"721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb\": container with ID starting with 721236f58d5a8aaef12ba819a2895d24be944f97f8ced82e0d4ea2e363e92ccb not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.948501 4858 scope.go:117] "RemoveContainer" containerID="efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.948828 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f"} err="failed to get container status 
\"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\": rpc error: code = NotFound desc = could not find container \"efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f\": container with ID starting with efa40eab66d99070d21117b68ea6d038773298635ba233daf35a4c76df3b7a7f not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.948858 4858 scope.go:117] "RemoveContainer" containerID="ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.949193 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9"} err="failed to get container status \"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\": rpc error: code = NotFound desc = could not find container \"ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9\": container with ID starting with ddd60bf442f3503cde7ba981c345bfc2194d59bdbbe836b4085d4da1b0a5cfe9 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.949216 4858 scope.go:117] "RemoveContainer" containerID="4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.949517 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5"} err="failed to get container status \"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\": rpc error: code = NotFound desc = could not find container \"4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5\": container with ID starting with 4c8f011aac434683df5cb8c67d9854c55e96b57eb56c7fc9f8a66c9e6c3525b5 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.949539 4858 scope.go:117] "RemoveContainer" containerID="2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.949981 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976"} err="failed to get container status \"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\": rpc error: code = NotFound desc = could not find container \"2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976\": container with ID starting with 2357bc2b850d6cdbf18bf8dff7baac19b895fb1c30e7eb79ac0562c5a95fd976 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.950014 4858 scope.go:117] "RemoveContainer" containerID="bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.950408 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281"} err="failed to get container status \"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\": rpc error: code = NotFound desc = could not find container \"bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281\": container with ID starting with bdfc46e04cf239c8263ae7e2d885127440aac437d23866e19b6db3036ff81281 not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.950442 4858 scope.go:117] "RemoveContainer" 
containerID="3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.955090 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a"} err="failed to get container status \"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\": rpc error: code = NotFound desc = could not find container \"3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a\": container with ID starting with 3a54874a6e4659e0d304d918cfe43e2473c199ccd1d0c2f373ed58ed48df237a not found: ID does not exist" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.955164 4858 scope.go:117] "RemoveContainer" containerID="d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6" Jan 27 20:20:49 crc kubenswrapper[4858]: I0127 20:20:49.955647 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6"} err="failed to get container status \"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\": rpc error: code = NotFound desc = could not find container \"d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6\": container with ID starting with d3fb1de588ec95edfaefedacd7111891403719c1aac726f3baafd900ca5ea7a6 not found: ID does not exist" Jan 27 20:20:50 crc kubenswrapper[4858]: I0127 20:20:50.080629 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cda3ac1-7db7-4215-a301-b757743bff59" path="/var/lib/kubelet/pods/5cda3ac1-7db7-4215-a301-b757743bff59/volumes" Jan 27 20:20:50 crc kubenswrapper[4858]: I0127 20:20:50.636944 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-855m5_0fea6600-49c2-4130-a506-6046f0f7760d/kube-multus/2.log" Jan 27 20:20:50 crc kubenswrapper[4858]: I0127 20:20:50.637034 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-855m5" event={"ID":"0fea6600-49c2-4130-a506-6046f0f7760d","Type":"ContainerStarted","Data":"6079fa6968d08f42b92368f8eca6320737a20bb89072cc439d4aa304a172ed72"} Jan 27 20:20:50 crc kubenswrapper[4858]: I0127 20:20:50.639605 4858 generic.go:334] "Generic (PLEG): container finished" podID="fe259694-7bc4-4311-a2a9-df0dab9ad484" containerID="b5c9ce457fd02eba04b5ae9035d37106bedb148c633313e421fb23b7d8334616" exitCode=0 Jan 27 20:20:50 crc kubenswrapper[4858]: I0127 20:20:50.639735 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" event={"ID":"fe259694-7bc4-4311-a2a9-df0dab9ad484","Type":"ContainerDied","Data":"b5c9ce457fd02eba04b5ae9035d37106bedb148c633313e421fb23b7d8334616"} Jan 27 20:20:51 crc kubenswrapper[4858]: I0127 20:20:51.649025 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" event={"ID":"fe259694-7bc4-4311-a2a9-df0dab9ad484","Type":"ContainerStarted","Data":"718b1c40e0aa717b44ec0394a860bacc6414d196b4d8027923dce856491a0ed8"} Jan 27 20:20:51 crc kubenswrapper[4858]: I0127 20:20:51.649756 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" event={"ID":"fe259694-7bc4-4311-a2a9-df0dab9ad484","Type":"ContainerStarted","Data":"1a5caf759077ca4de93b8deb1d97792c7553dc21a8e6f6fa650370da5fb39b9b"} Jan 27 20:20:51 crc kubenswrapper[4858]: I0127 20:20:51.649804 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" event={"ID":"fe259694-7bc4-4311-a2a9-df0dab9ad484","Type":"ContainerStarted","Data":"8b7ee36d6b610064461279fc51e81aaf8ee69191dbec3d023e8b8091bf5bb7ad"} Jan 27 20:20:51 crc kubenswrapper[4858]: I0127 20:20:51.649826 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" event={"ID":"fe259694-7bc4-4311-a2a9-df0dab9ad484","Type":"ContainerStarted","Data":"6ed4e2c220214fd5863c2d59f38533cd4aba4ff8a95b40d2471fe64b544c0375"} Jan 27 20:20:52 crc kubenswrapper[4858]: I0127 20:20:52.673348 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" event={"ID":"fe259694-7bc4-4311-a2a9-df0dab9ad484","Type":"ContainerStarted","Data":"a0f1dccb10791fe3ae93c4f38744381b816c5dd55d6a840dc84bb87aecf3384a"} Jan 27 20:20:52 crc kubenswrapper[4858]: I0127 20:20:52.673986 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" event={"ID":"fe259694-7bc4-4311-a2a9-df0dab9ad484","Type":"ContainerStarted","Data":"05333f3ce46bcf09c2ed66fa3486b0eefe087bdc2219c9efec536283dd38aa30"} Jan 27 20:20:54 crc kubenswrapper[4858]: I0127 20:20:54.654062 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-86ftq" Jan 27 20:20:54 crc kubenswrapper[4858]: I0127 20:20:54.692135 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" event={"ID":"fe259694-7bc4-4311-a2a9-df0dab9ad484","Type":"ContainerStarted","Data":"77932ba5ca5b9e688f4a5ca8f5f8c73bad7b336f9018c24ba1209ac9b34a315b"} Jan 27 20:20:58 crc kubenswrapper[4858]: I0127 20:20:58.718394 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" event={"ID":"fe259694-7bc4-4311-a2a9-df0dab9ad484","Type":"ContainerStarted","Data":"0fa4e5fbb485462b751b4f77433ad7afa8d3843769ad9f4ba5cab51ab4db2214"} Jan 27 20:20:58 crc kubenswrapper[4858]: I0127 20:20:58.721687 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:58 crc kubenswrapper[4858]: I0127 20:20:58.721711 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:58 crc kubenswrapper[4858]: I0127 20:20:58.721721 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:58 crc kubenswrapper[4858]: I0127 20:20:58.748691 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:58 crc kubenswrapper[4858]: I0127 20:20:58.751642 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:20:58 crc kubenswrapper[4858]: I0127 20:20:58.760281 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" podStartSLOduration=9.760262376 podStartE2EDuration="9.760262376s" podCreationTimestamp="2026-01-27 20:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:20:58.75565724 +0000 UTC m=+803.463472956" watchObservedRunningTime="2026-01-27 20:20:58.760262376 +0000 UTC m=+803.468078082" Jan 27 20:21:19 crc kubenswrapper[4858]: I0127 20:21:19.514302 
4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9c8sw" Jan 27 20:21:30 crc kubenswrapper[4858]: I0127 20:21:30.657962 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt"] Jan 27 20:21:30 crc kubenswrapper[4858]: I0127 20:21:30.660124 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" Jan 27 20:21:30 crc kubenswrapper[4858]: I0127 20:21:30.663954 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 20:21:30 crc kubenswrapper[4858]: I0127 20:21:30.671088 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt"] Jan 27 20:21:30 crc kubenswrapper[4858]: I0127 20:21:30.751766 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdtzh\" (UniqueName: \"kubernetes.io/projected/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-kube-api-access-rdtzh\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt\" (UID: \"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" Jan 27 20:21:30 crc kubenswrapper[4858]: I0127 20:21:30.751837 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt\" (UID: \"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" Jan 27 20:21:30 crc kubenswrapper[4858]: I0127 20:21:30.751881 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt\" (UID: \"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" Jan 27 20:21:30 crc kubenswrapper[4858]: I0127 20:21:30.853698 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt\" (UID: \"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" Jan 27 20:21:30 crc kubenswrapper[4858]: I0127 20:21:30.853815 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt\" (UID: \"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" Jan 27 20:21:30 crc kubenswrapper[4858]: I0127 20:21:30.853884 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdtzh\" (UniqueName: 
\"kubernetes.io/projected/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-kube-api-access-rdtzh\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt\" (UID: \"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" Jan 27 20:21:30 crc kubenswrapper[4858]: I0127 20:21:30.854348 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt\" (UID: \"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" Jan 27 20:21:30 crc kubenswrapper[4858]: I0127 20:21:30.854363 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt\" (UID: \"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" Jan 27 20:21:30 crc kubenswrapper[4858]: I0127 20:21:30.880046 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdtzh\" (UniqueName: \"kubernetes.io/projected/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-kube-api-access-rdtzh\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt\" (UID: \"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" Jan 27 20:21:30 crc kubenswrapper[4858]: I0127 20:21:30.982882 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" Jan 27 20:21:31 crc kubenswrapper[4858]: I0127 20:21:31.228449 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt"] Jan 27 20:21:31 crc kubenswrapper[4858]: I0127 20:21:31.943603 4858 generic.go:334] "Generic (PLEG): container finished" podID="2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627" containerID="2d23297a3dd35e59e0042cb21e2bcaa96662c398f1a821fbfc3bffb0c8f73af0" exitCode=0 Jan 27 20:21:31 crc kubenswrapper[4858]: I0127 20:21:31.943664 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" event={"ID":"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627","Type":"ContainerDied","Data":"2d23297a3dd35e59e0042cb21e2bcaa96662c398f1a821fbfc3bffb0c8f73af0"} Jan 27 20:21:31 crc kubenswrapper[4858]: I0127 20:21:31.945593 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" event={"ID":"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627","Type":"ContainerStarted","Data":"55365dc9c7ed1db0edbc43d7ec421cbe42be013be128f784de9891f18b3248f7"} Jan 27 20:21:32 crc kubenswrapper[4858]: I0127 20:21:32.990792 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t4j5t"] Jan 27 20:21:32 crc kubenswrapper[4858]: I0127 20:21:32.992080 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t4j5t" Jan 27 20:21:33 crc kubenswrapper[4858]: I0127 20:21:33.006228 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t4j5t"] Jan 27 20:21:33 crc kubenswrapper[4858]: I0127 20:21:33.085579 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkhrk\" (UniqueName: \"kubernetes.io/projected/7c884e31-2c47-4fa0-9431-f7c66bf30aab-kube-api-access-wkhrk\") pod \"redhat-operators-t4j5t\" (UID: \"7c884e31-2c47-4fa0-9431-f7c66bf30aab\") " pod="openshift-marketplace/redhat-operators-t4j5t" Jan 27 20:21:33 crc kubenswrapper[4858]: I0127 20:21:33.085645 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c884e31-2c47-4fa0-9431-f7c66bf30aab-catalog-content\") pod \"redhat-operators-t4j5t\" (UID: \"7c884e31-2c47-4fa0-9431-f7c66bf30aab\") " pod="openshift-marketplace/redhat-operators-t4j5t" Jan 27 20:21:33 crc kubenswrapper[4858]: I0127 20:21:33.085718 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c884e31-2c47-4fa0-9431-f7c66bf30aab-utilities\") pod \"redhat-operators-t4j5t\" (UID: \"7c884e31-2c47-4fa0-9431-f7c66bf30aab\") " pod="openshift-marketplace/redhat-operators-t4j5t" Jan 27 20:21:33 crc kubenswrapper[4858]: I0127 20:21:33.186957 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c884e31-2c47-4fa0-9431-f7c66bf30aab-utilities\") pod \"redhat-operators-t4j5t\" (UID: \"7c884e31-2c47-4fa0-9431-f7c66bf30aab\") " pod="openshift-marketplace/redhat-operators-t4j5t" Jan 27 20:21:33 crc kubenswrapper[4858]: I0127 20:21:33.187143 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkhrk\" (UniqueName: \"kubernetes.io/projected/7c884e31-2c47-4fa0-9431-f7c66bf30aab-kube-api-access-wkhrk\") pod \"redhat-operators-t4j5t\" (UID: \"7c884e31-2c47-4fa0-9431-f7c66bf30aab\") " pod="openshift-marketplace/redhat-operators-t4j5t" Jan 27 20:21:33 crc kubenswrapper[4858]: I0127 20:21:33.187197 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c884e31-2c47-4fa0-9431-f7c66bf30aab-catalog-content\") pod \"redhat-operators-t4j5t\" (UID: \"7c884e31-2c47-4fa0-9431-f7c66bf30aab\") " pod="openshift-marketplace/redhat-operators-t4j5t" Jan 27 20:21:33 crc kubenswrapper[4858]: I0127 20:21:33.187469 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c884e31-2c47-4fa0-9431-f7c66bf30aab-utilities\") pod \"redhat-operators-t4j5t\" (UID: \"7c884e31-2c47-4fa0-9431-f7c66bf30aab\") " pod="openshift-marketplace/redhat-operators-t4j5t" Jan 27 20:21:33 crc kubenswrapper[4858]: I0127 20:21:33.187735 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c884e31-2c47-4fa0-9431-f7c66bf30aab-catalog-content\") pod \"redhat-operators-t4j5t\" (UID: \"7c884e31-2c47-4fa0-9431-f7c66bf30aab\") " pod="openshift-marketplace/redhat-operators-t4j5t" Jan 27 20:21:33 crc kubenswrapper[4858]: I0127 20:21:33.213524 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wkhrk\" (UniqueName: \"kubernetes.io/projected/7c884e31-2c47-4fa0-9431-f7c66bf30aab-kube-api-access-wkhrk\") pod \"redhat-operators-t4j5t\" (UID: \"7c884e31-2c47-4fa0-9431-f7c66bf30aab\") " pod="openshift-marketplace/redhat-operators-t4j5t" Jan 27 20:21:33 crc kubenswrapper[4858]: I0127 20:21:33.309283 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t4j5t" Jan 27 20:21:33 crc kubenswrapper[4858]: I0127 20:21:33.570258 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t4j5t"] Jan 27 20:21:33 crc kubenswrapper[4858]: I0127 20:21:33.956104 4858 generic.go:334] "Generic (PLEG): container finished" podID="7c884e31-2c47-4fa0-9431-f7c66bf30aab" containerID="9d6e2ba3e2947f132fd95619770d8057ea25bb57c11dab28fb97bdb3a44c43bc" exitCode=0 Jan 27 20:21:33 crc kubenswrapper[4858]: I0127 20:21:33.956158 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4j5t" event={"ID":"7c884e31-2c47-4fa0-9431-f7c66bf30aab","Type":"ContainerDied","Data":"9d6e2ba3e2947f132fd95619770d8057ea25bb57c11dab28fb97bdb3a44c43bc"} Jan 27 20:21:33 crc kubenswrapper[4858]: I0127 20:21:33.956203 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4j5t" event={"ID":"7c884e31-2c47-4fa0-9431-f7c66bf30aab","Type":"ContainerStarted","Data":"e4e63bb457ca17c4c79a859c7e4245cf81ecd4f8a53a56288f752beb5da04b27"} Jan 27 20:21:33 crc kubenswrapper[4858]: I0127 20:21:33.958336 4858 generic.go:334] "Generic (PLEG): container finished" podID="2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627" containerID="6c7d9ed4760fd64d86f3d31c72fdaf3d9fb5e53d9f1808688defcfb509f19e53" exitCode=0 Jan 27 20:21:33 crc kubenswrapper[4858]: I0127 20:21:33.958367 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" event={"ID":"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627","Type":"ContainerDied","Data":"6c7d9ed4760fd64d86f3d31c72fdaf3d9fb5e53d9f1808688defcfb509f19e53"} Jan 27 20:21:34 crc kubenswrapper[4858]: I0127 20:21:34.967000 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4j5t" event={"ID":"7c884e31-2c47-4fa0-9431-f7c66bf30aab","Type":"ContainerStarted","Data":"2fedaf98e374daf238ef650d27ade6cde3890e6c2f3e040c59d5abfd88dfaaff"} Jan 27 20:21:34 crc kubenswrapper[4858]: I0127 20:21:34.969171 4858 generic.go:334] "Generic (PLEG): container finished" podID="2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627" containerID="aef6d17b212376097d9010083a9d27c0632d5819fd862e03e7369a6534999536" exitCode=0 Jan 27 20:21:34 crc kubenswrapper[4858]: I0127 20:21:34.969222 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" event={"ID":"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627","Type":"ContainerDied","Data":"aef6d17b212376097d9010083a9d27c0632d5819fd862e03e7369a6534999536"} Jan 27 20:21:35 crc kubenswrapper[4858]: I0127 20:21:35.978431 4858 generic.go:334] "Generic (PLEG): container finished" podID="7c884e31-2c47-4fa0-9431-f7c66bf30aab" containerID="2fedaf98e374daf238ef650d27ade6cde3890e6c2f3e040c59d5abfd88dfaaff" exitCode=0 Jan 27 20:21:35 crc kubenswrapper[4858]: I0127 20:21:35.978568 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4j5t" 
event={"ID":"7c884e31-2c47-4fa0-9431-f7c66bf30aab","Type":"ContainerDied","Data":"2fedaf98e374daf238ef650d27ade6cde3890e6c2f3e040c59d5abfd88dfaaff"} Jan 27 20:21:36 crc kubenswrapper[4858]: I0127 20:21:36.290872 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" Jan 27 20:21:36 crc kubenswrapper[4858]: I0127 20:21:36.326889 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-util\") pod \"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627\" (UID: \"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627\") " Jan 27 20:21:36 crc kubenswrapper[4858]: I0127 20:21:36.326998 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdtzh\" (UniqueName: \"kubernetes.io/projected/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-kube-api-access-rdtzh\") pod \"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627\" (UID: \"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627\") " Jan 27 20:21:36 crc kubenswrapper[4858]: I0127 20:21:36.327062 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-bundle\") pod \"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627\" (UID: \"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627\") " Jan 27 20:21:36 crc kubenswrapper[4858]: I0127 20:21:36.329699 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-bundle" (OuterVolumeSpecName: "bundle") pod "2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627" (UID: "2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:21:36 crc kubenswrapper[4858]: I0127 20:21:36.335335 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-kube-api-access-rdtzh" (OuterVolumeSpecName: "kube-api-access-rdtzh") pod "2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627" (UID: "2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627"). InnerVolumeSpecName "kube-api-access-rdtzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:21:36 crc kubenswrapper[4858]: I0127 20:21:36.340248 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-util" (OuterVolumeSpecName: "util") pod "2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627" (UID: "2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:21:36 crc kubenswrapper[4858]: I0127 20:21:36.427749 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-util\") on node \"crc\" DevicePath \"\"" Jan 27 20:21:36 crc kubenswrapper[4858]: I0127 20:21:36.427782 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdtzh\" (UniqueName: \"kubernetes.io/projected/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-kube-api-access-rdtzh\") on node \"crc\" DevicePath \"\"" Jan 27 20:21:36 crc kubenswrapper[4858]: I0127 20:21:36.427795 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:21:36 crc kubenswrapper[4858]: I0127 20:21:36.985909 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" event={"ID":"2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627","Type":"ContainerDied","Data":"55365dc9c7ed1db0edbc43d7ec421cbe42be013be128f784de9891f18b3248f7"} Jan 27 20:21:36 crc kubenswrapper[4858]: I0127 20:21:36.986196 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55365dc9c7ed1db0edbc43d7ec421cbe42be013be128f784de9891f18b3248f7" Jan 27 20:21:36 crc kubenswrapper[4858]: I0127 20:21:36.986119 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt" Jan 27 20:21:36 crc kubenswrapper[4858]: I0127 20:21:36.988376 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4j5t" event={"ID":"7c884e31-2c47-4fa0-9431-f7c66bf30aab","Type":"ContainerStarted","Data":"e0a96eee9e21419f1566d6b5d958796cfeeaa6c7dda653a89e701fe94de7bbd9"} Jan 27 20:21:37 crc kubenswrapper[4858]: I0127 20:21:37.006665 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t4j5t" podStartSLOduration=2.605144011 podStartE2EDuration="5.006648356s" podCreationTimestamp="2026-01-27 20:21:32 +0000 UTC" firstStartedPulling="2026-01-27 20:21:33.957887496 +0000 UTC m=+838.665703202" lastFinishedPulling="2026-01-27 20:21:36.359391841 +0000 UTC m=+841.067207547" observedRunningTime="2026-01-27 20:21:37.006061209 +0000 UTC m=+841.713876925" watchObservedRunningTime="2026-01-27 20:21:37.006648356 +0000 UTC m=+841.714464062" Jan 27 20:21:43 crc kubenswrapper[4858]: I0127 20:21:43.314839 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t4j5t" Jan 27 20:21:43 crc kubenswrapper[4858]: I0127 20:21:43.316173 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t4j5t" Jan 27 20:21:44 crc kubenswrapper[4858]: I0127 20:21:44.368776 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t4j5t" podUID="7c884e31-2c47-4fa0-9431-f7c66bf30aab" containerName="registry-server" probeResult="failure" output=< Jan 27 20:21:44 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 20:21:44 crc kubenswrapper[4858]: > Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.240937 4858 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-bdznk"] Jan 27 20:21:45 crc kubenswrapper[4858]: E0127 20:21:45.241723 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627" containerName="extract" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.241749 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627" containerName="extract" Jan 27 20:21:45 crc kubenswrapper[4858]: E0127 20:21:45.241774 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627" containerName="util" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.241781 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627" containerName="util" Jan 27 20:21:45 crc kubenswrapper[4858]: E0127 20:21:45.241795 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627" containerName="pull" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.241802 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627" containerName="pull" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.241921 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627" containerName="extract" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.242412 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bdznk" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.244516 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.245330 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.245560 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-ll2bf" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.260840 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-bdznk"] Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.350327 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd78v\" (UniqueName: \"kubernetes.io/projected/35e8e577-768b-425e-ae5e-74f9f4710566-kube-api-access-vd78v\") pod \"obo-prometheus-operator-68bc856cb9-bdznk\" (UID: \"35e8e577-768b-425e-ae5e-74f9f4710566\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bdznk" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.367762 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh"] Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.368929 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.371797 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.372307 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-t8hgs" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.384976 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-vk825"] Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.385976 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-vk825" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.395692 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh"] Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.401901 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-vk825"] Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.452728 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c4c617c2-8b14-4e9c-8a40-ab1353beeb33-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh\" (UID: \"c4c617c2-8b14-4e9c-8a40-ab1353beeb33\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.452811 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/812a6b90-9a07-4f7f-864d-baa13b5ab210-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57c849b6b8-vk825\" (UID: \"812a6b90-9a07-4f7f-864d-baa13b5ab210\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-vk825" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.452846 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4c617c2-8b14-4e9c-8a40-ab1353beeb33-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh\" (UID: \"c4c617c2-8b14-4e9c-8a40-ab1353beeb33\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.452934 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/812a6b90-9a07-4f7f-864d-baa13b5ab210-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57c849b6b8-vk825\" (UID: \"812a6b90-9a07-4f7f-864d-baa13b5ab210\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-vk825" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.453005 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd78v\" (UniqueName: \"kubernetes.io/projected/35e8e577-768b-425e-ae5e-74f9f4710566-kube-api-access-vd78v\") pod 
\"obo-prometheus-operator-68bc856cb9-bdznk\" (UID: \"35e8e577-768b-425e-ae5e-74f9f4710566\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bdznk" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.486507 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd78v\" (UniqueName: \"kubernetes.io/projected/35e8e577-768b-425e-ae5e-74f9f4710566-kube-api-access-vd78v\") pod \"obo-prometheus-operator-68bc856cb9-bdznk\" (UID: \"35e8e577-768b-425e-ae5e-74f9f4710566\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bdznk" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.554203 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/812a6b90-9a07-4f7f-864d-baa13b5ab210-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57c849b6b8-vk825\" (UID: \"812a6b90-9a07-4f7f-864d-baa13b5ab210\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-vk825" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.554323 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c4c617c2-8b14-4e9c-8a40-ab1353beeb33-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh\" (UID: \"c4c617c2-8b14-4e9c-8a40-ab1353beeb33\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.554352 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/812a6b90-9a07-4f7f-864d-baa13b5ab210-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57c849b6b8-vk825\" (UID: \"812a6b90-9a07-4f7f-864d-baa13b5ab210\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-vk825" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.554376 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4c617c2-8b14-4e9c-8a40-ab1353beeb33-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh\" (UID: \"c4c617c2-8b14-4e9c-8a40-ab1353beeb33\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.560070 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c4c617c2-8b14-4e9c-8a40-ab1353beeb33-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh\" (UID: \"c4c617c2-8b14-4e9c-8a40-ab1353beeb33\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.560593 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c4c617c2-8b14-4e9c-8a40-ab1353beeb33-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh\" (UID: \"c4c617c2-8b14-4e9c-8a40-ab1353beeb33\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.560610 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/812a6b90-9a07-4f7f-864d-baa13b5ab210-apiservice-cert\") 
pod \"obo-prometheus-operator-admission-webhook-57c849b6b8-vk825\" (UID: \"812a6b90-9a07-4f7f-864d-baa13b5ab210\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-vk825" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.565923 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bdznk" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.571112 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/812a6b90-9a07-4f7f-864d-baa13b5ab210-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57c849b6b8-vk825\" (UID: \"812a6b90-9a07-4f7f-864d-baa13b5ab210\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-vk825" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.578760 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-dj5bj"] Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.579797 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-dj5bj" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.583775 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-cnfcr" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.589838 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.612477 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-dj5bj"] Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.658357 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/40809707-fd14-4599-a0ac-0bcb0c90661d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-dj5bj\" (UID: \"40809707-fd14-4599-a0ac-0bcb0c90661d\") " pod="openshift-operators/observability-operator-59bdc8b94-dj5bj" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.658420 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvmc8\" (UniqueName: \"kubernetes.io/projected/40809707-fd14-4599-a0ac-0bcb0c90661d-kube-api-access-tvmc8\") pod \"observability-operator-59bdc8b94-dj5bj\" (UID: \"40809707-fd14-4599-a0ac-0bcb0c90661d\") " pod="openshift-operators/observability-operator-59bdc8b94-dj5bj" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.686007 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.703191 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-vk825" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.759887 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/40809707-fd14-4599-a0ac-0bcb0c90661d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-dj5bj\" (UID: \"40809707-fd14-4599-a0ac-0bcb0c90661d\") " pod="openshift-operators/observability-operator-59bdc8b94-dj5bj" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.759944 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvmc8\" (UniqueName: \"kubernetes.io/projected/40809707-fd14-4599-a0ac-0bcb0c90661d-kube-api-access-tvmc8\") pod \"observability-operator-59bdc8b94-dj5bj\" (UID: \"40809707-fd14-4599-a0ac-0bcb0c90661d\") " pod="openshift-operators/observability-operator-59bdc8b94-dj5bj" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.773571 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/40809707-fd14-4599-a0ac-0bcb0c90661d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-dj5bj\" (UID: \"40809707-fd14-4599-a0ac-0bcb0c90661d\") " pod="openshift-operators/observability-operator-59bdc8b94-dj5bj" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.781985 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-nfc2q"] Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.782449 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvmc8\" (UniqueName: \"kubernetes.io/projected/40809707-fd14-4599-a0ac-0bcb0c90661d-kube-api-access-tvmc8\") pod \"observability-operator-59bdc8b94-dj5bj\" (UID: \"40809707-fd14-4599-a0ac-0bcb0c90661d\") " pod="openshift-operators/observability-operator-59bdc8b94-dj5bj" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.783005 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-nfc2q" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.788965 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-cp9vn" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.793488 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-nfc2q"] Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.861338 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c0cbb64-d018-496a-a983-8c4761f142ed-openshift-service-ca\") pod \"perses-operator-5bf474d74f-nfc2q\" (UID: \"3c0cbb64-d018-496a-a983-8c4761f142ed\") " pod="openshift-operators/perses-operator-5bf474d74f-nfc2q" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.861387 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg49b\" (UniqueName: \"kubernetes.io/projected/3c0cbb64-d018-496a-a983-8c4761f142ed-kube-api-access-mg49b\") pod \"perses-operator-5bf474d74f-nfc2q\" (UID: \"3c0cbb64-d018-496a-a983-8c4761f142ed\") " pod="openshift-operators/perses-operator-5bf474d74f-nfc2q" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.928020 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-bdznk"] Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.964294 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c0cbb64-d018-496a-a983-8c4761f142ed-openshift-service-ca\") pod \"perses-operator-5bf474d74f-nfc2q\" (UID: \"3c0cbb64-d018-496a-a983-8c4761f142ed\") " pod="openshift-operators/perses-operator-5bf474d74f-nfc2q" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.964342 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg49b\" (UniqueName: \"kubernetes.io/projected/3c0cbb64-d018-496a-a983-8c4761f142ed-kube-api-access-mg49b\") pod \"perses-operator-5bf474d74f-nfc2q\" (UID: \"3c0cbb64-d018-496a-a983-8c4761f142ed\") " pod="openshift-operators/perses-operator-5bf474d74f-nfc2q" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.965653 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c0cbb64-d018-496a-a983-8c4761f142ed-openshift-service-ca\") pod \"perses-operator-5bf474d74f-nfc2q\" (UID: \"3c0cbb64-d018-496a-a983-8c4761f142ed\") " pod="openshift-operators/perses-operator-5bf474d74f-nfc2q" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.971707 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-dj5bj" Jan 27 20:21:45 crc kubenswrapper[4858]: I0127 20:21:45.990631 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg49b\" (UniqueName: \"kubernetes.io/projected/3c0cbb64-d018-496a-a983-8c4761f142ed-kube-api-access-mg49b\") pod \"perses-operator-5bf474d74f-nfc2q\" (UID: \"3c0cbb64-d018-496a-a983-8c4761f142ed\") " pod="openshift-operators/perses-operator-5bf474d74f-nfc2q" Jan 27 20:21:46 crc kubenswrapper[4858]: I0127 20:21:46.050106 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bdznk" event={"ID":"35e8e577-768b-425e-ae5e-74f9f4710566","Type":"ContainerStarted","Data":"677572270d81679c260ab726ba71fb8bebde6eb76ca9ec437287687828f44d9b"} Jan 27 20:21:46 crc kubenswrapper[4858]: I0127 20:21:46.067424 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh"] Jan 27 20:21:46 crc kubenswrapper[4858]: W0127 20:21:46.115165 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4c617c2_8b14_4e9c_8a40_ab1353beeb33.slice/crio-5b4c1d6172eec65a98786e30581f6e458c882f4370104523d0e705e9d79e16b7 WatchSource:0}: Error finding container 5b4c1d6172eec65a98786e30581f6e458c882f4370104523d0e705e9d79e16b7: Status 404 returned error can't find the container with id 5b4c1d6172eec65a98786e30581f6e458c882f4370104523d0e705e9d79e16b7 Jan 27 20:21:46 crc kubenswrapper[4858]: I0127 20:21:46.119180 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-nfc2q" Jan 27 20:21:46 crc kubenswrapper[4858]: I0127 20:21:46.181613 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-vk825"] Jan 27 20:21:46 crc kubenswrapper[4858]: I0127 20:21:46.314118 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-dj5bj"] Jan 27 20:21:46 crc kubenswrapper[4858]: W0127 20:21:46.330501 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40809707_fd14_4599_a0ac_0bcb0c90661d.slice/crio-da5cf720f23e450420ad284f70b487f8fec0cc7c23c6583d5a147109a07fe8cb WatchSource:0}: Error finding container da5cf720f23e450420ad284f70b487f8fec0cc7c23c6583d5a147109a07fe8cb: Status 404 returned error can't find the container with id da5cf720f23e450420ad284f70b487f8fec0cc7c23c6583d5a147109a07fe8cb Jan 27 20:21:46 crc kubenswrapper[4858]: I0127 20:21:46.420702 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-nfc2q"] Jan 27 20:21:46 crc kubenswrapper[4858]: W0127 20:21:46.427599 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c0cbb64_d018_496a_a983_8c4761f142ed.slice/crio-9c693aabc056d95533acc20fdcbf923eae326b2746c6a8d0cb86709bf0d12ea4 WatchSource:0}: Error finding container 9c693aabc056d95533acc20fdcbf923eae326b2746c6a8d0cb86709bf0d12ea4: Status 404 returned error can't find the container with id 9c693aabc056d95533acc20fdcbf923eae326b2746c6a8d0cb86709bf0d12ea4 Jan 27 20:21:47 crc kubenswrapper[4858]: I0127 20:21:47.058130 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/perses-operator-5bf474d74f-nfc2q" event={"ID":"3c0cbb64-d018-496a-a983-8c4761f142ed","Type":"ContainerStarted","Data":"9c693aabc056d95533acc20fdcbf923eae326b2746c6a8d0cb86709bf0d12ea4"} Jan 27 20:21:47 crc kubenswrapper[4858]: I0127 20:21:47.059505 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh" event={"ID":"c4c617c2-8b14-4e9c-8a40-ab1353beeb33","Type":"ContainerStarted","Data":"5b4c1d6172eec65a98786e30581f6e458c882f4370104523d0e705e9d79e16b7"} Jan 27 20:21:47 crc kubenswrapper[4858]: I0127 20:21:47.060412 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-dj5bj" event={"ID":"40809707-fd14-4599-a0ac-0bcb0c90661d","Type":"ContainerStarted","Data":"da5cf720f23e450420ad284f70b487f8fec0cc7c23c6583d5a147109a07fe8cb"} Jan 27 20:21:47 crc kubenswrapper[4858]: I0127 20:21:47.061200 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-vk825" event={"ID":"812a6b90-9a07-4f7f-864d-baa13b5ab210","Type":"ContainerStarted","Data":"7171f2916073d429297937ee9ea5190f75c191c7ad2836781040aa893b04e65e"} Jan 27 20:21:53 crc kubenswrapper[4858]: I0127 20:21:53.356700 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t4j5t" Jan 27 20:21:53 crc kubenswrapper[4858]: I0127 20:21:53.430434 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t4j5t" Jan 27 20:21:54 crc kubenswrapper[4858]: I0127 20:21:54.180832 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t4j5t"] Jan 27 20:21:55 crc kubenswrapper[4858]: I0127 20:21:55.142233 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t4j5t" podUID="7c884e31-2c47-4fa0-9431-f7c66bf30aab" containerName="registry-server" containerID="cri-o://e0a96eee9e21419f1566d6b5d958796cfeeaa6c7dda653a89e701fe94de7bbd9" gracePeriod=2 Jan 27 20:21:56 crc kubenswrapper[4858]: I0127 20:21:56.153093 4858 generic.go:334] "Generic (PLEG): container finished" podID="7c884e31-2c47-4fa0-9431-f7c66bf30aab" containerID="e0a96eee9e21419f1566d6b5d958796cfeeaa6c7dda653a89e701fe94de7bbd9" exitCode=0 Jan 27 20:21:56 crc kubenswrapper[4858]: I0127 20:21:56.153143 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4j5t" event={"ID":"7c884e31-2c47-4fa0-9431-f7c66bf30aab","Type":"ContainerDied","Data":"e0a96eee9e21419f1566d6b5d958796cfeeaa6c7dda653a89e701fe94de7bbd9"} Jan 27 20:21:58 crc kubenswrapper[4858]: I0127 20:21:58.469486 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t4j5t" Jan 27 20:21:58 crc kubenswrapper[4858]: I0127 20:21:58.522438 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkhrk\" (UniqueName: \"kubernetes.io/projected/7c884e31-2c47-4fa0-9431-f7c66bf30aab-kube-api-access-wkhrk\") pod \"7c884e31-2c47-4fa0-9431-f7c66bf30aab\" (UID: \"7c884e31-2c47-4fa0-9431-f7c66bf30aab\") " Jan 27 20:21:58 crc kubenswrapper[4858]: I0127 20:21:58.522506 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c884e31-2c47-4fa0-9431-f7c66bf30aab-utilities\") pod \"7c884e31-2c47-4fa0-9431-f7c66bf30aab\" (UID: \"7c884e31-2c47-4fa0-9431-f7c66bf30aab\") " Jan 27 20:21:58 crc kubenswrapper[4858]: I0127 20:21:58.522726 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c884e31-2c47-4fa0-9431-f7c66bf30aab-catalog-content\") pod \"7c884e31-2c47-4fa0-9431-f7c66bf30aab\" (UID: \"7c884e31-2c47-4fa0-9431-f7c66bf30aab\") " Jan 27 20:21:58 crc kubenswrapper[4858]: I0127 20:21:58.523713 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c884e31-2c47-4fa0-9431-f7c66bf30aab-utilities" (OuterVolumeSpecName: "utilities") pod "7c884e31-2c47-4fa0-9431-f7c66bf30aab" (UID: "7c884e31-2c47-4fa0-9431-f7c66bf30aab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:21:58 crc kubenswrapper[4858]: I0127 20:21:58.530123 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c884e31-2c47-4fa0-9431-f7c66bf30aab-kube-api-access-wkhrk" (OuterVolumeSpecName: "kube-api-access-wkhrk") pod "7c884e31-2c47-4fa0-9431-f7c66bf30aab" (UID: "7c884e31-2c47-4fa0-9431-f7c66bf30aab"). InnerVolumeSpecName "kube-api-access-wkhrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:21:58 crc kubenswrapper[4858]: I0127 20:21:58.624088 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkhrk\" (UniqueName: \"kubernetes.io/projected/7c884e31-2c47-4fa0-9431-f7c66bf30aab-kube-api-access-wkhrk\") on node \"crc\" DevicePath \"\"" Jan 27 20:21:58 crc kubenswrapper[4858]: I0127 20:21:58.624123 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c884e31-2c47-4fa0-9431-f7c66bf30aab-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:21:58 crc kubenswrapper[4858]: I0127 20:21:58.665412 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c884e31-2c47-4fa0-9431-f7c66bf30aab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c884e31-2c47-4fa0-9431-f7c66bf30aab" (UID: "7c884e31-2c47-4fa0-9431-f7c66bf30aab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:21:58 crc kubenswrapper[4858]: I0127 20:21:58.724901 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c884e31-2c47-4fa0-9431-f7c66bf30aab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.173677 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t4j5t" Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.173696 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4j5t" event={"ID":"7c884e31-2c47-4fa0-9431-f7c66bf30aab","Type":"ContainerDied","Data":"e4e63bb457ca17c4c79a859c7e4245cf81ecd4f8a53a56288f752beb5da04b27"} Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.173749 4858 scope.go:117] "RemoveContainer" containerID="e0a96eee9e21419f1566d6b5d958796cfeeaa6c7dda653a89e701fe94de7bbd9" Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.179019 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-dj5bj" event={"ID":"40809707-fd14-4599-a0ac-0bcb0c90661d","Type":"ContainerStarted","Data":"af0eb88cd8b0cb3bf9843570a8b7246cec757e1cafe207a625e6a278975b58f1"} Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.180357 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-dj5bj" Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.183736 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-dj5bj" Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.186129 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-vk825" event={"ID":"812a6b90-9a07-4f7f-864d-baa13b5ab210","Type":"ContainerStarted","Data":"56629b4887d361c564730609b6184027f7b6cac4cd6fd2e09c7745918d992505"} Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.189522 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-nfc2q" event={"ID":"3c0cbb64-d018-496a-a983-8c4761f142ed","Type":"ContainerStarted","Data":"ac8d340afb190f6136158aa4e0b22490ca74b2ce86da0bd9ed42f1e19872df3e"} Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.189666 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-nfc2q" Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.191431 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bdznk" event={"ID":"35e8e577-768b-425e-ae5e-74f9f4710566","Type":"ContainerStarted","Data":"7ac0540a02226e7fa5d9b2c974722de7c9683eca5645f39f5eba435053e4dadb"} Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.193028 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh" event={"ID":"c4c617c2-8b14-4e9c-8a40-ab1353beeb33","Type":"ContainerStarted","Data":"665446111e7bbbef697e10b413a2e8fd33b9de5a48f347c33c442e701d18f45a"} Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.193466 4858 scope.go:117] "RemoveContainer" containerID="2fedaf98e374daf238ef650d27ade6cde3890e6c2f3e040c59d5abfd88dfaaff" Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.207368 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-dj5bj" podStartSLOduration=2.324442251 podStartE2EDuration="14.20735373s" podCreationTimestamp="2026-01-27 20:21:45 +0000 UTC" firstStartedPulling="2026-01-27 20:21:46.339192186 +0000 UTC m=+851.047007892" lastFinishedPulling="2026-01-27 20:21:58.222103665 +0000 UTC m=+862.929919371" 
observedRunningTime="2026-01-27 20:21:59.204961361 +0000 UTC m=+863.912777057" watchObservedRunningTime="2026-01-27 20:21:59.20735373 +0000 UTC m=+863.915169436" Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.218882 4858 scope.go:117] "RemoveContainer" containerID="9d6e2ba3e2947f132fd95619770d8057ea25bb57c11dab28fb97bdb3a44c43bc" Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.295186 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-nfc2q" podStartSLOduration=2.546238341 podStartE2EDuration="14.295170856s" podCreationTimestamp="2026-01-27 20:21:45 +0000 UTC" firstStartedPulling="2026-01-27 20:21:46.430027409 +0000 UTC m=+851.137843115" lastFinishedPulling="2026-01-27 20:21:58.178959934 +0000 UTC m=+862.886775630" observedRunningTime="2026-01-27 20:21:59.292274272 +0000 UTC m=+864.000089988" watchObservedRunningTime="2026-01-27 20:21:59.295170856 +0000 UTC m=+864.002986562" Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.307475 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t4j5t"] Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.310669 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t4j5t"] Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.329792 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh" podStartSLOduration=2.296306935 podStartE2EDuration="14.329773709s" podCreationTimestamp="2026-01-27 20:21:45 +0000 UTC" firstStartedPulling="2026-01-27 20:21:46.145798649 +0000 UTC m=+850.853614355" lastFinishedPulling="2026-01-27 20:21:58.179265423 +0000 UTC m=+862.887081129" observedRunningTime="2026-01-27 20:21:59.329274795 +0000 UTC m=+864.037090521" watchObservedRunningTime="2026-01-27 20:21:59.329773709 +0000 UTC m=+864.037589415" Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.358894 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57c849b6b8-vk825" podStartSLOduration=2.3357693680000002 podStartE2EDuration="14.358877823s" podCreationTimestamp="2026-01-27 20:21:45 +0000 UTC" firstStartedPulling="2026-01-27 20:21:46.181792812 +0000 UTC m=+850.889608518" lastFinishedPulling="2026-01-27 20:21:58.204901267 +0000 UTC m=+862.912716973" observedRunningTime="2026-01-27 20:21:59.35602946 +0000 UTC m=+864.063845166" watchObservedRunningTime="2026-01-27 20:21:59.358877823 +0000 UTC m=+864.066693529" Jan 27 20:21:59 crc kubenswrapper[4858]: I0127 20:21:59.377052 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-bdznk" podStartSLOduration=2.122167336 podStartE2EDuration="14.377036009s" podCreationTimestamp="2026-01-27 20:21:45 +0000 UTC" firstStartedPulling="2026-01-27 20:21:45.960298931 +0000 UTC m=+850.668114637" lastFinishedPulling="2026-01-27 20:21:58.215167604 +0000 UTC m=+862.922983310" observedRunningTime="2026-01-27 20:21:59.375807244 +0000 UTC m=+864.083622950" watchObservedRunningTime="2026-01-27 20:21:59.377036009 +0000 UTC m=+864.084851715" Jan 27 20:22:00 crc kubenswrapper[4858]: I0127 20:22:00.078741 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c884e31-2c47-4fa0-9431-f7c66bf30aab" path="/var/lib/kubelet/pods/7c884e31-2c47-4fa0-9431-f7c66bf30aab/volumes" Jan 27 
Jan 27 20:22:06 crc kubenswrapper[4858]: I0127 20:22:06.121171 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-nfc2q"
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.593211 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2"]
Jan 27 20:22:24 crc kubenswrapper[4858]: E0127 20:22:24.593969 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c884e31-2c47-4fa0-9431-f7c66bf30aab" containerName="registry-server"
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.593984 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c884e31-2c47-4fa0-9431-f7c66bf30aab" containerName="registry-server"
Jan 27 20:22:24 crc kubenswrapper[4858]: E0127 20:22:24.593996 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c884e31-2c47-4fa0-9431-f7c66bf30aab" containerName="extract-content"
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.594003 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c884e31-2c47-4fa0-9431-f7c66bf30aab" containerName="extract-content"
Jan 27 20:22:24 crc kubenswrapper[4858]: E0127 20:22:24.594015 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c884e31-2c47-4fa0-9431-f7c66bf30aab" containerName="extract-utilities"
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.594024 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c884e31-2c47-4fa0-9431-f7c66bf30aab" containerName="extract-utilities"
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.594137 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c884e31-2c47-4fa0-9431-f7c66bf30aab" containerName="registry-server"
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.594975 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2"
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.597086 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.602744 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2"]
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.749025 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2\" (UID: \"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2"
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.749643 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plnxv\" (UniqueName: \"kubernetes.io/projected/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-kube-api-access-plnxv\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2\" (UID: \"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2"
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.749694 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2\" (UID: \"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2"
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.851616 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plnxv\" (UniqueName: \"kubernetes.io/projected/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-kube-api-access-plnxv\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2\" (UID: \"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2"
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.851700 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2\" (UID: \"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2"
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.851769 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2\" (UID: \"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2"
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.852532 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2\" (UID: \"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2"
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.852742 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2\" (UID: \"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2"
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.875276 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plnxv\" (UniqueName: \"kubernetes.io/projected/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-kube-api-access-plnxv\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2\" (UID: \"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2"
Jan 27 20:22:24 crc kubenswrapper[4858]: I0127 20:22:24.910968 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2"
Jan 27 20:22:25 crc kubenswrapper[4858]: I0127 20:22:25.417765 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2"]
Jan 27 20:22:25 crc kubenswrapper[4858]: I0127 20:22:25.470512 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2" event={"ID":"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5","Type":"ContainerStarted","Data":"8634c94266bc17ae3eef01c1154c8279f30feb9724c319b52fc4062427364540"}
Jan 27 20:22:26 crc kubenswrapper[4858]: I0127 20:22:26.482177 4858 generic.go:334] "Generic (PLEG): container finished" podID="40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5" containerID="0c61f9b323f268c308d3e3b1a7e514620799d5711df0b3d5cba8437036aad591" exitCode=0
Jan 27 20:22:26 crc kubenswrapper[4858]: I0127 20:22:26.482833 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2" event={"ID":"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5","Type":"ContainerDied","Data":"0c61f9b323f268c308d3e3b1a7e514620799d5711df0b3d5cba8437036aad591"}
Jan 27 20:22:29 crc kubenswrapper[4858]: I0127 20:22:29.328767 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:22:29 crc kubenswrapper[4858]: I0127 20:22:29.329099 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
containerID="f785f5403e6e6288db71fd8e7e7ea809c0269a5e14dff4e5e9a5ec03ee17eced" exitCode=0 Jan 27 20:22:31 crc kubenswrapper[4858]: I0127 20:22:31.525954 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2" event={"ID":"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5","Type":"ContainerDied","Data":"f785f5403e6e6288db71fd8e7e7ea809c0269a5e14dff4e5e9a5ec03ee17eced"} Jan 27 20:22:32 crc kubenswrapper[4858]: I0127 20:22:32.533174 4858 generic.go:334] "Generic (PLEG): container finished" podID="40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5" containerID="f410c1051e3ca6bc1de8410f15afa0dc58d2ac8d2a83830225f6fae221ff0ee0" exitCode=0 Jan 27 20:22:32 crc kubenswrapper[4858]: I0127 20:22:32.533239 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2" event={"ID":"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5","Type":"ContainerDied","Data":"f410c1051e3ca6bc1de8410f15afa0dc58d2ac8d2a83830225f6fae221ff0ee0"} Jan 27 20:22:33 crc kubenswrapper[4858]: I0127 20:22:33.793528 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2" Jan 27 20:22:33 crc kubenswrapper[4858]: I0127 20:22:33.978333 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plnxv\" (UniqueName: \"kubernetes.io/projected/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-kube-api-access-plnxv\") pod \"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5\" (UID: \"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5\") " Jan 27 20:22:33 crc kubenswrapper[4858]: I0127 20:22:33.978392 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-bundle\") pod \"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5\" (UID: \"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5\") " Jan 27 20:22:33 crc kubenswrapper[4858]: I0127 20:22:33.978422 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-util\") pod \"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5\" (UID: \"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5\") " Jan 27 20:22:33 crc kubenswrapper[4858]: I0127 20:22:33.979507 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-bundle" (OuterVolumeSpecName: "bundle") pod "40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5" (UID: "40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:22:33 crc kubenswrapper[4858]: I0127 20:22:33.986817 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-kube-api-access-plnxv" (OuterVolumeSpecName: "kube-api-access-plnxv") pod "40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5" (UID: "40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5"). InnerVolumeSpecName "kube-api-access-plnxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:22:33 crc kubenswrapper[4858]: I0127 20:22:33.990227 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-util" (OuterVolumeSpecName: "util") pod "40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5" (UID: "40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5"). 
InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:22:34 crc kubenswrapper[4858]: I0127 20:22:34.080178 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plnxv\" (UniqueName: \"kubernetes.io/projected/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-kube-api-access-plnxv\") on node \"crc\" DevicePath \"\"" Jan 27 20:22:34 crc kubenswrapper[4858]: I0127 20:22:34.080218 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:22:34 crc kubenswrapper[4858]: I0127 20:22:34.080229 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5-util\") on node \"crc\" DevicePath \"\"" Jan 27 20:22:34 crc kubenswrapper[4858]: I0127 20:22:34.551214 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2" event={"ID":"40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5","Type":"ContainerDied","Data":"8634c94266bc17ae3eef01c1154c8279f30feb9724c319b52fc4062427364540"} Jan 27 20:22:34 crc kubenswrapper[4858]: I0127 20:22:34.551274 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8634c94266bc17ae3eef01c1154c8279f30feb9724c319b52fc4062427364540" Jan 27 20:22:34 crc kubenswrapper[4858]: I0127 20:22:34.551304 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2" Jan 27 20:22:41 crc kubenswrapper[4858]: I0127 20:22:41.203275 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9z7zh"] Jan 27 20:22:41 crc kubenswrapper[4858]: E0127 20:22:41.204475 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5" containerName="extract" Jan 27 20:22:41 crc kubenswrapper[4858]: I0127 20:22:41.204494 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5" containerName="extract" Jan 27 20:22:41 crc kubenswrapper[4858]: E0127 20:22:41.204506 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5" containerName="util" Jan 27 20:22:41 crc kubenswrapper[4858]: I0127 20:22:41.204513 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5" containerName="util" Jan 27 20:22:41 crc kubenswrapper[4858]: E0127 20:22:41.204538 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5" containerName="pull" Jan 27 20:22:41 crc kubenswrapper[4858]: I0127 20:22:41.204564 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5" containerName="pull" Jan 27 20:22:41 crc kubenswrapper[4858]: I0127 20:22:41.204670 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5" containerName="extract" Jan 27 20:22:41 crc kubenswrapper[4858]: I0127 20:22:41.205212 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-9z7zh" Jan 27 20:22:41 crc kubenswrapper[4858]: I0127 20:22:41.207533 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 20:22:41 crc kubenswrapper[4858]: I0127 20:22:41.207572 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-g75dd" Jan 27 20:22:41 crc kubenswrapper[4858]: I0127 20:22:41.207719 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 20:22:41 crc kubenswrapper[4858]: I0127 20:22:41.219074 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5w9k\" (UniqueName: \"kubernetes.io/projected/613b924b-b7a1-4507-94ed-be8377c1d87d-kube-api-access-r5w9k\") pod \"nmstate-operator-646758c888-9z7zh\" (UID: \"613b924b-b7a1-4507-94ed-be8377c1d87d\") " pod="openshift-nmstate/nmstate-operator-646758c888-9z7zh" Jan 27 20:22:41 crc kubenswrapper[4858]: I0127 20:22:41.225336 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9z7zh"] Jan 27 20:22:41 crc kubenswrapper[4858]: I0127 20:22:41.321176 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5w9k\" (UniqueName: \"kubernetes.io/projected/613b924b-b7a1-4507-94ed-be8377c1d87d-kube-api-access-r5w9k\") pod \"nmstate-operator-646758c888-9z7zh\" (UID: \"613b924b-b7a1-4507-94ed-be8377c1d87d\") " pod="openshift-nmstate/nmstate-operator-646758c888-9z7zh" Jan 27 20:22:41 crc kubenswrapper[4858]: I0127 20:22:41.344582 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5w9k\" (UniqueName: \"kubernetes.io/projected/613b924b-b7a1-4507-94ed-be8377c1d87d-kube-api-access-r5w9k\") pod \"nmstate-operator-646758c888-9z7zh\" (UID: \"613b924b-b7a1-4507-94ed-be8377c1d87d\") " pod="openshift-nmstate/nmstate-operator-646758c888-9z7zh" Jan 27 20:22:41 crc kubenswrapper[4858]: I0127 20:22:41.525633 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-9z7zh" Jan 27 20:22:41 crc kubenswrapper[4858]: I0127 20:22:41.967621 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-9z7zh"] Jan 27 20:22:42 crc kubenswrapper[4858]: I0127 20:22:42.599729 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-9z7zh" event={"ID":"613b924b-b7a1-4507-94ed-be8377c1d87d","Type":"ContainerStarted","Data":"798a7143cee8e9e8446043c79c2a6aa0e77b752a5906dd0aa458de052f2d2f83"} Jan 27 20:22:44 crc kubenswrapper[4858]: I0127 20:22:44.616063 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-9z7zh" event={"ID":"613b924b-b7a1-4507-94ed-be8377c1d87d","Type":"ContainerStarted","Data":"6ee156a3fd220280af1bf6c15468b6494616a09ca801d58d9c6d19522ce575af"} Jan 27 20:22:44 crc kubenswrapper[4858]: I0127 20:22:44.637649 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-9z7zh" podStartSLOduration=1.789221909 podStartE2EDuration="3.637627988s" podCreationTimestamp="2026-01-27 20:22:41 +0000 UTC" firstStartedPulling="2026-01-27 20:22:41.98106104 +0000 UTC m=+906.688876736" lastFinishedPulling="2026-01-27 20:22:43.829467109 +0000 UTC m=+908.537282815" observedRunningTime="2026-01-27 20:22:44.634680853 +0000 UTC m=+909.342496569" watchObservedRunningTime="2026-01-27 20:22:44.637627988 +0000 UTC m=+909.345443714" Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.793765 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dqkn5"] Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.796434 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-dqkn5" Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.798799 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-bmm9v" Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.801414 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6bf2p"] Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.802433 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6bf2p" Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.805291 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.810087 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dqkn5"] Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.828406 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-xxvgs"] Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.829681 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-xxvgs" Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.839521 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6bf2p"] Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.905107 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7bfb1746-53f8-427e-ab49-1b84279b9437-nmstate-lock\") pod \"nmstate-handler-xxvgs\" (UID: \"7bfb1746-53f8-427e-ab49-1b84279b9437\") " pod="openshift-nmstate/nmstate-handler-xxvgs" Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.905362 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7bfb1746-53f8-427e-ab49-1b84279b9437-ovs-socket\") pod \"nmstate-handler-xxvgs\" (UID: \"7bfb1746-53f8-427e-ab49-1b84279b9437\") " pod="openshift-nmstate/nmstate-handler-xxvgs" Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.905457 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bqhg\" (UniqueName: \"kubernetes.io/projected/a9cfc031-eed0-42fd-94cc-707c19c84cae-kube-api-access-6bqhg\") pod \"nmstate-metrics-54757c584b-dqkn5\" (UID: \"a9cfc031-eed0-42fd-94cc-707c19c84cae\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-dqkn5" Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.905529 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f00f3a98-58f2-445c-a008-290a987092a2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6bf2p\" (UID: \"f00f3a98-58f2-445c-a008-290a987092a2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6bf2p" Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.905644 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7bfb1746-53f8-427e-ab49-1b84279b9437-dbus-socket\") pod \"nmstate-handler-xxvgs\" (UID: \"7bfb1746-53f8-427e-ab49-1b84279b9437\") " pod="openshift-nmstate/nmstate-handler-xxvgs" Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.905718 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjrls\" (UniqueName: \"kubernetes.io/projected/f00f3a98-58f2-445c-a008-290a987092a2-kube-api-access-bjrls\") pod \"nmstate-webhook-8474b5b9d8-6bf2p\" (UID: \"f00f3a98-58f2-445c-a008-290a987092a2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6bf2p" Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.905796 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clp6s\" (UniqueName: \"kubernetes.io/projected/7bfb1746-53f8-427e-ab49-1b84279b9437-kube-api-access-clp6s\") pod \"nmstate-handler-xxvgs\" (UID: \"7bfb1746-53f8-427e-ab49-1b84279b9437\") " pod="openshift-nmstate/nmstate-handler-xxvgs" Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.955889 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5"] Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.956910 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5" Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.959412 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.960225 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.961660 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hnx4s" Jan 27 20:22:45 crc kubenswrapper[4858]: I0127 20:22:45.963667 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5"] Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.007575 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7bfb1746-53f8-427e-ab49-1b84279b9437-ovs-socket\") pod \"nmstate-handler-xxvgs\" (UID: \"7bfb1746-53f8-427e-ab49-1b84279b9437\") " pod="openshift-nmstate/nmstate-handler-xxvgs" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.007683 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bqhg\" (UniqueName: \"kubernetes.io/projected/a9cfc031-eed0-42fd-94cc-707c19c84cae-kube-api-access-6bqhg\") pod \"nmstate-metrics-54757c584b-dqkn5\" (UID: \"a9cfc031-eed0-42fd-94cc-707c19c84cae\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-dqkn5" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.007714 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f00f3a98-58f2-445c-a008-290a987092a2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6bf2p\" (UID: \"f00f3a98-58f2-445c-a008-290a987092a2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6bf2p" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.007719 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7bfb1746-53f8-427e-ab49-1b84279b9437-ovs-socket\") pod \"nmstate-handler-xxvgs\" (UID: \"7bfb1746-53f8-427e-ab49-1b84279b9437\") " pod="openshift-nmstate/nmstate-handler-xxvgs" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.007736 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7bfb1746-53f8-427e-ab49-1b84279b9437-dbus-socket\") pod \"nmstate-handler-xxvgs\" (UID: \"7bfb1746-53f8-427e-ab49-1b84279b9437\") " pod="openshift-nmstate/nmstate-handler-xxvgs" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.007880 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8ee1edac-ca66-4ed5-a281-67b735710be5-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-pjzt5\" (UID: \"8ee1edac-ca66-4ed5-a281-67b735710be5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5" Jan 27 20:22:46 crc kubenswrapper[4858]: E0127 20:22:46.007928 4858 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.007939 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjrls\" (UniqueName: 
\"kubernetes.io/projected/f00f3a98-58f2-445c-a008-290a987092a2-kube-api-access-bjrls\") pod \"nmstate-webhook-8474b5b9d8-6bf2p\" (UID: \"f00f3a98-58f2-445c-a008-290a987092a2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6bf2p" Jan 27 20:22:46 crc kubenswrapper[4858]: E0127 20:22:46.008021 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f00f3a98-58f2-445c-a008-290a987092a2-tls-key-pair podName:f00f3a98-58f2-445c-a008-290a987092a2 nodeName:}" failed. No retries permitted until 2026-01-27 20:22:46.507989809 +0000 UTC m=+911.215805515 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/f00f3a98-58f2-445c-a008-290a987092a2-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-6bf2p" (UID: "f00f3a98-58f2-445c-a008-290a987092a2") : secret "openshift-nmstate-webhook" not found Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.008029 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7bfb1746-53f8-427e-ab49-1b84279b9437-dbus-socket\") pod \"nmstate-handler-xxvgs\" (UID: \"7bfb1746-53f8-427e-ab49-1b84279b9437\") " pod="openshift-nmstate/nmstate-handler-xxvgs" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.008115 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhr9v\" (UniqueName: \"kubernetes.io/projected/8ee1edac-ca66-4ed5-a281-67b735710be5-kube-api-access-vhr9v\") pod \"nmstate-console-plugin-7754f76f8b-pjzt5\" (UID: \"8ee1edac-ca66-4ed5-a281-67b735710be5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.008178 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clp6s\" (UniqueName: \"kubernetes.io/projected/7bfb1746-53f8-427e-ab49-1b84279b9437-kube-api-access-clp6s\") pod \"nmstate-handler-xxvgs\" (UID: \"7bfb1746-53f8-427e-ab49-1b84279b9437\") " pod="openshift-nmstate/nmstate-handler-xxvgs" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.008269 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8ee1edac-ca66-4ed5-a281-67b735710be5-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-pjzt5\" (UID: \"8ee1edac-ca66-4ed5-a281-67b735710be5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.008395 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7bfb1746-53f8-427e-ab49-1b84279b9437-nmstate-lock\") pod \"nmstate-handler-xxvgs\" (UID: \"7bfb1746-53f8-427e-ab49-1b84279b9437\") " pod="openshift-nmstate/nmstate-handler-xxvgs" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.008510 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7bfb1746-53f8-427e-ab49-1b84279b9437-nmstate-lock\") pod \"nmstate-handler-xxvgs\" (UID: \"7bfb1746-53f8-427e-ab49-1b84279b9437\") " pod="openshift-nmstate/nmstate-handler-xxvgs" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.030608 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clp6s\" (UniqueName: \"kubernetes.io/projected/7bfb1746-53f8-427e-ab49-1b84279b9437-kube-api-access-clp6s\") pod 
\"nmstate-handler-xxvgs\" (UID: \"7bfb1746-53f8-427e-ab49-1b84279b9437\") " pod="openshift-nmstate/nmstate-handler-xxvgs" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.031676 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjrls\" (UniqueName: \"kubernetes.io/projected/f00f3a98-58f2-445c-a008-290a987092a2-kube-api-access-bjrls\") pod \"nmstate-webhook-8474b5b9d8-6bf2p\" (UID: \"f00f3a98-58f2-445c-a008-290a987092a2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6bf2p" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.045850 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bqhg\" (UniqueName: \"kubernetes.io/projected/a9cfc031-eed0-42fd-94cc-707c19c84cae-kube-api-access-6bqhg\") pod \"nmstate-metrics-54757c584b-dqkn5\" (UID: \"a9cfc031-eed0-42fd-94cc-707c19c84cae\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-dqkn5" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.109707 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8ee1edac-ca66-4ed5-a281-67b735710be5-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-pjzt5\" (UID: \"8ee1edac-ca66-4ed5-a281-67b735710be5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.109772 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhr9v\" (UniqueName: \"kubernetes.io/projected/8ee1edac-ca66-4ed5-a281-67b735710be5-kube-api-access-vhr9v\") pod \"nmstate-console-plugin-7754f76f8b-pjzt5\" (UID: \"8ee1edac-ca66-4ed5-a281-67b735710be5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.109797 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8ee1edac-ca66-4ed5-a281-67b735710be5-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-pjzt5\" (UID: \"8ee1edac-ca66-4ed5-a281-67b735710be5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5" Jan 27 20:22:46 crc kubenswrapper[4858]: E0127 20:22:46.109989 4858 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 27 20:22:46 crc kubenswrapper[4858]: E0127 20:22:46.110112 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ee1edac-ca66-4ed5-a281-67b735710be5-plugin-serving-cert podName:8ee1edac-ca66-4ed5-a281-67b735710be5 nodeName:}" failed. No retries permitted until 2026-01-27 20:22:46.610085119 +0000 UTC m=+911.317900995 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/8ee1edac-ca66-4ed5-a281-67b735710be5-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-pjzt5" (UID: "8ee1edac-ca66-4ed5-a281-67b735710be5") : secret "plugin-serving-cert" not found Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.110782 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8ee1edac-ca66-4ed5-a281-67b735710be5-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-pjzt5\" (UID: \"8ee1edac-ca66-4ed5-a281-67b735710be5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.115754 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-dqkn5" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.137164 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhr9v\" (UniqueName: \"kubernetes.io/projected/8ee1edac-ca66-4ed5-a281-67b735710be5-kube-api-access-vhr9v\") pod \"nmstate-console-plugin-7754f76f8b-pjzt5\" (UID: \"8ee1edac-ca66-4ed5-a281-67b735710be5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.165447 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-xxvgs" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.177883 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-54f4fcfcbd-79d92"] Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.179425 4858 util.go:30] "No sandbox for pod can be found. 
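
The two MountVolume.SetUp failures above are ordering races, not faults: the webhook and console-plugin pods were scheduled before their certificate Secrets ("openshift-nmstate-webhook", "plugin-serving-cert") had been created. nestedpendingoperations gates the retry ("No retries permitted until ... durationBeforeRetry 500ms"), and both mounts succeed on a later reconciler pass once the secrets exist (SetUp succeeded for tls-key-pair at 20:22:46.520 and for plugin-serving-cert at 20:22:46.631 below). A sketch of that gate; the 500ms initial delay is from the log, while the doubling on repeated failures is an assumption for illustration:

```go
// Retry gate in the spirit of the nestedpendingoperations entries above:
// after a failed MountVolume.SetUp, no retries are permitted until
// now+delay. The initial 500ms matches the log; the doubling on repeated
// failures is an assumption, not taken from the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

type opBackoff struct {
	delay   time.Duration
	retryAt time.Time
}

func (b *opBackoff) fail(now time.Time) {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // durationBeforeRetry seen in the log
	} else {
		b.delay *= 2 // assumed growth factor
	}
	b.retryAt = now.Add(b.delay)
}

func (b *opBackoff) allowed(now time.Time) bool { return !now.Before(b.retryAt) }

// mountSecret stands in for MountVolume.SetUp on a secret-backed volume.
func mountSecret(secretExists bool) error {
	if !secretExists {
		return errors.New(`secret "plugin-serving-cert" not found`)
	}
	return nil
}

func main() {
	var b opBackoff
	now := time.Now()
	if err := mountSecret(false); err != nil {
		b.fail(now)
		fmt.Printf("no retries permitted until %s (durationBeforeRetry %s): %v\n",
			b.retryAt.Format(time.RFC3339), b.delay, err)
	}
	// A later reconciler pass: the secret now exists and the gate expired.
	if b.allowed(now.Add(600 * time.Millisecond)) {
		fmt.Println("retry result:", mountSecret(true)) // prints: retry result: <nil>
	}
}
```
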
Need to start a new one" pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.198475 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-54f4fcfcbd-79d92"] Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.210833 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/26199327-bbf4-4181-a80f-a232025e77b8-console-serving-cert\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.210927 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26199327-bbf4-4181-a80f-a232025e77b8-trusted-ca-bundle\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.210973 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg28x\" (UniqueName: \"kubernetes.io/projected/26199327-bbf4-4181-a80f-a232025e77b8-kube-api-access-zg28x\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.210998 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/26199327-bbf4-4181-a80f-a232025e77b8-console-oauth-config\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.211055 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/26199327-bbf4-4181-a80f-a232025e77b8-console-config\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.211072 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/26199327-bbf4-4181-a80f-a232025e77b8-oauth-serving-cert\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.211093 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/26199327-bbf4-4181-a80f-a232025e77b8-service-ca\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.312102 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26199327-bbf4-4181-a80f-a232025e77b8-trusted-ca-bundle\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc 
kubenswrapper[4858]: I0127 20:22:46.312642 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg28x\" (UniqueName: \"kubernetes.io/projected/26199327-bbf4-4181-a80f-a232025e77b8-kube-api-access-zg28x\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.312666 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/26199327-bbf4-4181-a80f-a232025e77b8-console-oauth-config\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.312703 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/26199327-bbf4-4181-a80f-a232025e77b8-console-config\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.312720 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/26199327-bbf4-4181-a80f-a232025e77b8-oauth-serving-cert\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.312742 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/26199327-bbf4-4181-a80f-a232025e77b8-service-ca\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.312774 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/26199327-bbf4-4181-a80f-a232025e77b8-console-serving-cert\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.314529 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/26199327-bbf4-4181-a80f-a232025e77b8-console-config\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.315759 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/26199327-bbf4-4181-a80f-a232025e77b8-oauth-serving-cert\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.318841 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/26199327-bbf4-4181-a80f-a232025e77b8-service-ca\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.319096 
4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26199327-bbf4-4181-a80f-a232025e77b8-trusted-ca-bundle\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.323409 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/26199327-bbf4-4181-a80f-a232025e77b8-console-serving-cert\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.323912 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/26199327-bbf4-4181-a80f-a232025e77b8-console-oauth-config\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.331607 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg28x\" (UniqueName: \"kubernetes.io/projected/26199327-bbf4-4181-a80f-a232025e77b8-kube-api-access-zg28x\") pod \"console-54f4fcfcbd-79d92\" (UID: \"26199327-bbf4-4181-a80f-a232025e77b8\") " pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.384973 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-dqkn5"] Jan 27 20:22:46 crc kubenswrapper[4858]: W0127 20:22:46.389109 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9cfc031_eed0_42fd_94cc_707c19c84cae.slice/crio-9f4220bf2dec8f5baa2ce8636f36e3731603a9d237b48fd0b3a94ad63786a858 WatchSource:0}: Error finding container 9f4220bf2dec8f5baa2ce8636f36e3731603a9d237b48fd0b3a94ad63786a858: Status 404 returned error can't find the container with id 9f4220bf2dec8f5baa2ce8636f36e3731603a9d237b48fd0b3a94ad63786a858 Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.516113 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f00f3a98-58f2-445c-a008-290a987092a2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6bf2p\" (UID: \"f00f3a98-58f2-445c-a008-290a987092a2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6bf2p" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.520063 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f00f3a98-58f2-445c-a008-290a987092a2-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-6bf2p\" (UID: \"f00f3a98-58f2-445c-a008-290a987092a2\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6bf2p" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.548286 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.626457 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8ee1edac-ca66-4ed5-a281-67b735710be5-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-pjzt5\" (UID: \"8ee1edac-ca66-4ed5-a281-67b735710be5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.631478 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8ee1edac-ca66-4ed5-a281-67b735710be5-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-pjzt5\" (UID: \"8ee1edac-ca66-4ed5-a281-67b735710be5\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.633588 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dqkn5" event={"ID":"a9cfc031-eed0-42fd-94cc-707c19c84cae","Type":"ContainerStarted","Data":"9f4220bf2dec8f5baa2ce8636f36e3731603a9d237b48fd0b3a94ad63786a858"} Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.635086 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-xxvgs" event={"ID":"7bfb1746-53f8-427e-ab49-1b84279b9437","Type":"ContainerStarted","Data":"58c66fd6e8d27e4a57b609f238b15513974af2ab99570df0559dbf3615048c92"} Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.733905 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6bf2p" Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.766730 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-54f4fcfcbd-79d92"] Jan 27 20:22:46 crc kubenswrapper[4858]: W0127 20:22:46.767855 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26199327_bbf4_4181_a80f_a232025e77b8.slice/crio-dd24fef9378180d220de1769e2b941c9e451a0c1105e5f9f41f3c3214d1e5a71 WatchSource:0}: Error finding container dd24fef9378180d220de1769e2b941c9e451a0c1105e5f9f41f3c3214d1e5a71: Status 404 returned error can't find the container with id dd24fef9378180d220de1769e2b941c9e451a0c1105e5f9f41f3c3214d1e5a71 Jan 27 20:22:46 crc kubenswrapper[4858]: I0127 20:22:46.878063 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5" Jan 27 20:22:47 crc kubenswrapper[4858]: I0127 20:22:47.000748 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-6bf2p"] Jan 27 20:22:47 crc kubenswrapper[4858]: I0127 20:22:47.121600 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5"] Jan 27 20:22:47 crc kubenswrapper[4858]: I0127 20:22:47.643069 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5" event={"ID":"8ee1edac-ca66-4ed5-a281-67b735710be5","Type":"ContainerStarted","Data":"f95a9b5ef13a03b985c1ecc34e256127a4150251e05aa1f580d41ffe42956fc8"} Jan 27 20:22:47 crc kubenswrapper[4858]: I0127 20:22:47.644316 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6bf2p" event={"ID":"f00f3a98-58f2-445c-a008-290a987092a2","Type":"ContainerStarted","Data":"cd7c3b8a1244379037685dc305cb743d51b9b6aa412f23fdb3f341c3c09f0b08"} Jan 27 20:22:47 crc kubenswrapper[4858]: I0127 20:22:47.646361 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-54f4fcfcbd-79d92" event={"ID":"26199327-bbf4-4181-a80f-a232025e77b8","Type":"ContainerStarted","Data":"b2b31164e799317b3dbd75bd6d2d087661fc40ff636898131e0f55124a24c3a2"} Jan 27 20:22:47 crc kubenswrapper[4858]: I0127 20:22:47.646442 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-54f4fcfcbd-79d92" event={"ID":"26199327-bbf4-4181-a80f-a232025e77b8","Type":"ContainerStarted","Data":"dd24fef9378180d220de1769e2b941c9e451a0c1105e5f9f41f3c3214d1e5a71"} Jan 27 20:22:47 crc kubenswrapper[4858]: I0127 20:22:47.673907 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-54f4fcfcbd-79d92" podStartSLOduration=1.6738853470000001 podStartE2EDuration="1.673885347s" podCreationTimestamp="2026-01-27 20:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:22:47.671113536 +0000 UTC m=+912.378929252" watchObservedRunningTime="2026-01-27 20:22:47.673885347 +0000 UTC m=+912.381701053" Jan 27 20:22:49 crc kubenswrapper[4858]: I0127 20:22:49.662125 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dqkn5" event={"ID":"a9cfc031-eed0-42fd-94cc-707c19c84cae","Type":"ContainerStarted","Data":"eba74445a8ef99a1d3ee062938b651880695b81c227f4f016045c5ea4e86ccb9"} Jan 27 20:22:49 crc kubenswrapper[4858]: I0127 20:22:49.664123 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5" event={"ID":"8ee1edac-ca66-4ed5-a281-67b735710be5","Type":"ContainerStarted","Data":"f6f0f610fde8de30cdd4785877b02ac2f4c6316d88cf87cbc635b8c4f7c868c0"} Jan 27 20:22:49 crc kubenswrapper[4858]: I0127 20:22:49.665832 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6bf2p" event={"ID":"f00f3a98-58f2-445c-a008-290a987092a2","Type":"ContainerStarted","Data":"2f90c86d423d4c696a028a512312e8fab90f3263ab79b943dfe85606567f5990"} Jan 27 20:22:49 crc kubenswrapper[4858]: I0127 20:22:49.665917 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6bf2p" Jan 27 20:22:49 crc kubenswrapper[4858]: I0127 
20:22:49.668123 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-xxvgs" event={"ID":"7bfb1746-53f8-427e-ab49-1b84279b9437","Type":"ContainerStarted","Data":"8a7c5a0bfc9518120567fe776dada104f3ab1563849537d157287a3e6bf070c4"} Jan 27 20:22:49 crc kubenswrapper[4858]: I0127 20:22:49.668267 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-xxvgs" Jan 27 20:22:49 crc kubenswrapper[4858]: I0127 20:22:49.684431 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-pjzt5" podStartSLOduration=2.457940936 podStartE2EDuration="4.684411456s" podCreationTimestamp="2026-01-27 20:22:45 +0000 UTC" firstStartedPulling="2026-01-27 20:22:47.139413781 +0000 UTC m=+911.847229487" lastFinishedPulling="2026-01-27 20:22:49.365884301 +0000 UTC m=+914.073700007" observedRunningTime="2026-01-27 20:22:49.682800609 +0000 UTC m=+914.390616335" watchObservedRunningTime="2026-01-27 20:22:49.684411456 +0000 UTC m=+914.392227162" Jan 27 20:22:49 crc kubenswrapper[4858]: I0127 20:22:49.707029 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6bf2p" podStartSLOduration=3.198823726 podStartE2EDuration="4.707002861s" podCreationTimestamp="2026-01-27 20:22:45 +0000 UTC" firstStartedPulling="2026-01-27 20:22:47.027874347 +0000 UTC m=+911.735690053" lastFinishedPulling="2026-01-27 20:22:48.536053442 +0000 UTC m=+913.243869188" observedRunningTime="2026-01-27 20:22:49.703063306 +0000 UTC m=+914.410879032" watchObservedRunningTime="2026-01-27 20:22:49.707002861 +0000 UTC m=+914.414818567" Jan 27 20:22:49 crc kubenswrapper[4858]: I0127 20:22:49.728648 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-xxvgs" podStartSLOduration=2.415683541 podStartE2EDuration="4.728623748s" podCreationTimestamp="2026-01-27 20:22:45 +0000 UTC" firstStartedPulling="2026-01-27 20:22:46.215973768 +0000 UTC m=+910.923789474" lastFinishedPulling="2026-01-27 20:22:48.528913925 +0000 UTC m=+913.236729681" observedRunningTime="2026-01-27 20:22:49.727723131 +0000 UTC m=+914.435538847" watchObservedRunningTime="2026-01-27 20:22:49.728623748 +0000 UTC m=+914.436439454" Jan 27 20:22:51 crc kubenswrapper[4858]: I0127 20:22:51.690936 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-dqkn5" event={"ID":"a9cfc031-eed0-42fd-94cc-707c19c84cae","Type":"ContainerStarted","Data":"36407b2c8fa894d67b1d3ca59c64378508baa7d1cf28e5479332ba1d3b4ae483"} Jan 27 20:22:51 crc kubenswrapper[4858]: I0127 20:22:51.707245 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-dqkn5" podStartSLOduration=2.199593286 podStartE2EDuration="6.707224971s" podCreationTimestamp="2026-01-27 20:22:45 +0000 UTC" firstStartedPulling="2026-01-27 20:22:46.391364724 +0000 UTC m=+911.099180430" lastFinishedPulling="2026-01-27 20:22:50.898996409 +0000 UTC m=+915.606812115" observedRunningTime="2026-01-27 20:22:51.705616234 +0000 UTC m=+916.413431960" watchObservedRunningTime="2026-01-27 20:22:51.707224971 +0000 UTC m=+916.415040677" Jan 27 20:22:56 crc kubenswrapper[4858]: I0127 20:22:56.197877 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-xxvgs" Jan 27 20:22:56 crc kubenswrapper[4858]: I0127 20:22:56.549077 
4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:56 crc kubenswrapper[4858]: I0127 20:22:56.550994 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:56 crc kubenswrapper[4858]: I0127 20:22:56.555801 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:56 crc kubenswrapper[4858]: I0127 20:22:56.749928 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-54f4fcfcbd-79d92" Jan 27 20:22:56 crc kubenswrapper[4858]: I0127 20:22:56.838958 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-p72qt"] Jan 27 20:22:59 crc kubenswrapper[4858]: I0127 20:22:59.328971 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:22:59 crc kubenswrapper[4858]: I0127 20:22:59.329675 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:23:06 crc kubenswrapper[4858]: I0127 20:23:06.741581 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-6bf2p" Jan 27 20:23:21 crc kubenswrapper[4858]: I0127 20:23:21.887403 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-p72qt" podUID="25775e1a-346e-4b05-ae25-819a5aad12b7" containerName="console" containerID="cri-o://e5b659a7489e2d82fafdb4ebff10ba0619c4ccb16ab454f8bb6889bf9c443749" gracePeriod=15 Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.353897 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-p72qt_25775e1a-346e-4b05-ae25-819a5aad12b7/console/0.log" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.354322 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.543850 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25775e1a-346e-4b05-ae25-819a5aad12b7-console-serving-cert\") pod \"25775e1a-346e-4b05-ae25-819a5aad12b7\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.543927 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4kx8\" (UniqueName: \"kubernetes.io/projected/25775e1a-346e-4b05-ae25-819a5aad12b7-kube-api-access-v4kx8\") pod \"25775e1a-346e-4b05-ae25-819a5aad12b7\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.543977 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-oauth-serving-cert\") pod \"25775e1a-346e-4b05-ae25-819a5aad12b7\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.544003 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-service-ca\") pod \"25775e1a-346e-4b05-ae25-819a5aad12b7\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.544019 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-console-config\") pod \"25775e1a-346e-4b05-ae25-819a5aad12b7\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.544057 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25775e1a-346e-4b05-ae25-819a5aad12b7-console-oauth-config\") pod \"25775e1a-346e-4b05-ae25-819a5aad12b7\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.544102 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-trusted-ca-bundle\") pod \"25775e1a-346e-4b05-ae25-819a5aad12b7\" (UID: \"25775e1a-346e-4b05-ae25-819a5aad12b7\") " Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.545161 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "25775e1a-346e-4b05-ae25-819a5aad12b7" (UID: "25775e1a-346e-4b05-ae25-819a5aad12b7"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.545184 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-service-ca" (OuterVolumeSpecName: "service-ca") pod "25775e1a-346e-4b05-ae25-819a5aad12b7" (UID: "25775e1a-346e-4b05-ae25-819a5aad12b7"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.545209 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-console-config" (OuterVolumeSpecName: "console-config") pod "25775e1a-346e-4b05-ae25-819a5aad12b7" (UID: "25775e1a-346e-4b05-ae25-819a5aad12b7"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.545269 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "25775e1a-346e-4b05-ae25-819a5aad12b7" (UID: "25775e1a-346e-4b05-ae25-819a5aad12b7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.553832 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25775e1a-346e-4b05-ae25-819a5aad12b7-kube-api-access-v4kx8" (OuterVolumeSpecName: "kube-api-access-v4kx8") pod "25775e1a-346e-4b05-ae25-819a5aad12b7" (UID: "25775e1a-346e-4b05-ae25-819a5aad12b7"). InnerVolumeSpecName "kube-api-access-v4kx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.554780 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25775e1a-346e-4b05-ae25-819a5aad12b7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "25775e1a-346e-4b05-ae25-819a5aad12b7" (UID: "25775e1a-346e-4b05-ae25-819a5aad12b7"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.557844 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25775e1a-346e-4b05-ae25-819a5aad12b7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "25775e1a-346e-4b05-ae25-819a5aad12b7" (UID: "25775e1a-346e-4b05-ae25-819a5aad12b7"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.629915 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw"] Jan 27 20:23:22 crc kubenswrapper[4858]: E0127 20:23:22.630348 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25775e1a-346e-4b05-ae25-819a5aad12b7" containerName="console" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.630369 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="25775e1a-346e-4b05-ae25-819a5aad12b7" containerName="console" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.630525 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="25775e1a-346e-4b05-ae25-819a5aad12b7" containerName="console" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.631765 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.633629 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.638502 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw"] Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.646056 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw\" (UID: \"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.646110 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw\" (UID: \"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.646137 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7pbm\" (UniqueName: \"kubernetes.io/projected/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-kube-api-access-x7pbm\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw\" (UID: \"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.646236 4858 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25775e1a-346e-4b05-ae25-819a5aad12b7-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.646249 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4kx8\" (UniqueName: \"kubernetes.io/projected/25775e1a-346e-4b05-ae25-819a5aad12b7-kube-api-access-v4kx8\") on node \"crc\" DevicePath \"\"" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.646258 4858 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.646269 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.646278 4858 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.646286 4858 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/25775e1a-346e-4b05-ae25-819a5aad12b7-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.646294 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25775e1a-346e-4b05-ae25-819a5aad12b7-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.747620 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw\" (UID: \"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.747703 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw\" (UID: \"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.747745 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7pbm\" (UniqueName: \"kubernetes.io/projected/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-kube-api-access-x7pbm\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw\" (UID: \"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.748147 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw\" (UID: \"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.748250 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw\" (UID: \"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.764430 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7pbm\" (UniqueName: \"kubernetes.io/projected/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-kube-api-access-x7pbm\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw\" (UID: \"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.949588 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.955264 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-p72qt_25775e1a-346e-4b05-ae25-819a5aad12b7/console/0.log" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.955364 4858 generic.go:334] "Generic (PLEG): container finished" podID="25775e1a-346e-4b05-ae25-819a5aad12b7" containerID="e5b659a7489e2d82fafdb4ebff10ba0619c4ccb16ab454f8bb6889bf9c443749" exitCode=2 Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.955415 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p72qt" event={"ID":"25775e1a-346e-4b05-ae25-819a5aad12b7","Type":"ContainerDied","Data":"e5b659a7489e2d82fafdb4ebff10ba0619c4ccb16ab454f8bb6889bf9c443749"} Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.955462 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-p72qt" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.955496 4858 scope.go:117] "RemoveContainer" containerID="e5b659a7489e2d82fafdb4ebff10ba0619c4ccb16ab454f8bb6889bf9c443749" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.955476 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p72qt" event={"ID":"25775e1a-346e-4b05-ae25-819a5aad12b7","Type":"ContainerDied","Data":"02c9e4062d6e347f44ea1fe99a7fa2dededa9904847ab253e0e01fe1e0d66341"} Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.977938 4858 scope.go:117] "RemoveContainer" containerID="e5b659a7489e2d82fafdb4ebff10ba0619c4ccb16ab454f8bb6889bf9c443749" Jan 27 20:23:22 crc kubenswrapper[4858]: E0127 20:23:22.978646 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5b659a7489e2d82fafdb4ebff10ba0619c4ccb16ab454f8bb6889bf9c443749\": container with ID starting with e5b659a7489e2d82fafdb4ebff10ba0619c4ccb16ab454f8bb6889bf9c443749 not found: ID does not exist" containerID="e5b659a7489e2d82fafdb4ebff10ba0619c4ccb16ab454f8bb6889bf9c443749" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.978716 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5b659a7489e2d82fafdb4ebff10ba0619c4ccb16ab454f8bb6889bf9c443749"} err="failed to get container status \"e5b659a7489e2d82fafdb4ebff10ba0619c4ccb16ab454f8bb6889bf9c443749\": rpc error: code = NotFound desc = could not find container \"e5b659a7489e2d82fafdb4ebff10ba0619c4ccb16ab454f8bb6889bf9c443749\": container with ID starting with e5b659a7489e2d82fafdb4ebff10ba0619c4ccb16ab454f8bb6889bf9c443749 not found: ID does not exist" Jan 27 20:23:22 crc kubenswrapper[4858]: I0127 20:23:22.994597 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-p72qt"] Jan 27 20:23:23 crc kubenswrapper[4858]: I0127 20:23:23.003381 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-p72qt"] Jan 27 20:23:23 crc kubenswrapper[4858]: I0127 20:23:23.290474 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw"] Jan 27 20:23:23 crc kubenswrapper[4858]: I0127 20:23:23.965530 4858 generic.go:334] "Generic (PLEG): container finished" podID="86bd2beb-9d03-402c-bb7a-0ee191fa9f8d" 
containerID="e80214f706945c36e286144dc0e88edd6688533fd81dc728d25b7db57dbc3d46" exitCode=0 Jan 27 20:23:23 crc kubenswrapper[4858]: I0127 20:23:23.965571 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" event={"ID":"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d","Type":"ContainerDied","Data":"e80214f706945c36e286144dc0e88edd6688533fd81dc728d25b7db57dbc3d46"} Jan 27 20:23:23 crc kubenswrapper[4858]: I0127 20:23:23.965618 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" event={"ID":"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d","Type":"ContainerStarted","Data":"63afa2e0fe9b62087bfd034405d0eac536491757d43e892669d35c96c4a35caa"} Jan 27 20:23:24 crc kubenswrapper[4858]: I0127 20:23:24.080333 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25775e1a-346e-4b05-ae25-819a5aad12b7" path="/var/lib/kubelet/pods/25775e1a-346e-4b05-ae25-819a5aad12b7/volumes" Jan 27 20:23:25 crc kubenswrapper[4858]: I0127 20:23:25.984448 4858 generic.go:334] "Generic (PLEG): container finished" podID="86bd2beb-9d03-402c-bb7a-0ee191fa9f8d" containerID="aaa35b486cd82127154f1558bf72d83633f603fa67703624a9511f95067e6baa" exitCode=0 Jan 27 20:23:25 crc kubenswrapper[4858]: I0127 20:23:25.984531 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" event={"ID":"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d","Type":"ContainerDied","Data":"aaa35b486cd82127154f1558bf72d83633f603fa67703624a9511f95067e6baa"} Jan 27 20:23:26 crc kubenswrapper[4858]: I0127 20:23:26.998417 4858 generic.go:334] "Generic (PLEG): container finished" podID="86bd2beb-9d03-402c-bb7a-0ee191fa9f8d" containerID="8f4eede3becd8521d238fbf52069ba46584809326298811a4406e66bc633734c" exitCode=0 Jan 27 20:23:26 crc kubenswrapper[4858]: I0127 20:23:26.998510 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" event={"ID":"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d","Type":"ContainerDied","Data":"8f4eede3becd8521d238fbf52069ba46584809326298811a4406e66bc633734c"} Jan 27 20:23:28 crc kubenswrapper[4858]: I0127 20:23:28.268541 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" Jan 27 20:23:28 crc kubenswrapper[4858]: I0127 20:23:28.442948 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-util\") pod \"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d\" (UID: \"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d\") " Jan 27 20:23:28 crc kubenswrapper[4858]: I0127 20:23:28.443215 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-bundle\") pod \"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d\" (UID: \"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d\") " Jan 27 20:23:28 crc kubenswrapper[4858]: I0127 20:23:28.443363 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7pbm\" (UniqueName: \"kubernetes.io/projected/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-kube-api-access-x7pbm\") pod \"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d\" (UID: \"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d\") " Jan 27 20:23:28 crc kubenswrapper[4858]: I0127 20:23:28.444564 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-bundle" (OuterVolumeSpecName: "bundle") pod "86bd2beb-9d03-402c-bb7a-0ee191fa9f8d" (UID: "86bd2beb-9d03-402c-bb7a-0ee191fa9f8d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:23:28 crc kubenswrapper[4858]: I0127 20:23:28.452796 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-kube-api-access-x7pbm" (OuterVolumeSpecName: "kube-api-access-x7pbm") pod "86bd2beb-9d03-402c-bb7a-0ee191fa9f8d" (UID: "86bd2beb-9d03-402c-bb7a-0ee191fa9f8d"). InnerVolumeSpecName "kube-api-access-x7pbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:23:28 crc kubenswrapper[4858]: I0127 20:23:28.456984 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-util" (OuterVolumeSpecName: "util") pod "86bd2beb-9d03-402c-bb7a-0ee191fa9f8d" (UID: "86bd2beb-9d03-402c-bb7a-0ee191fa9f8d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:23:28 crc kubenswrapper[4858]: I0127 20:23:28.544926 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7pbm\" (UniqueName: \"kubernetes.io/projected/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-kube-api-access-x7pbm\") on node \"crc\" DevicePath \"\"" Jan 27 20:23:28 crc kubenswrapper[4858]: I0127 20:23:28.545191 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-util\") on node \"crc\" DevicePath \"\"" Jan 27 20:23:28 crc kubenswrapper[4858]: I0127 20:23:28.545202 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86bd2beb-9d03-402c-bb7a-0ee191fa9f8d-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:23:29 crc kubenswrapper[4858]: I0127 20:23:29.015266 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" event={"ID":"86bd2beb-9d03-402c-bb7a-0ee191fa9f8d","Type":"ContainerDied","Data":"63afa2e0fe9b62087bfd034405d0eac536491757d43e892669d35c96c4a35caa"} Jan 27 20:23:29 crc kubenswrapper[4858]: I0127 20:23:29.015358 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63afa2e0fe9b62087bfd034405d0eac536491757d43e892669d35c96c4a35caa" Jan 27 20:23:29 crc kubenswrapper[4858]: I0127 20:23:29.015372 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw" Jan 27 20:23:29 crc kubenswrapper[4858]: I0127 20:23:29.338403 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:23:29 crc kubenswrapper[4858]: I0127 20:23:29.338507 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:23:29 crc kubenswrapper[4858]: I0127 20:23:29.338603 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:23:29 crc kubenswrapper[4858]: I0127 20:23:29.339438 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"134de6cefdf9618660f3288534217e176eacedd779a7557a8425c203f6c864ec"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 20:23:29 crc kubenswrapper[4858]: I0127 20:23:29.339526 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://134de6cefdf9618660f3288534217e176eacedd779a7557a8425c203f6c864ec" gracePeriod=600 Jan 27 20:23:30 crc kubenswrapper[4858]: I0127 20:23:30.023183 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="134de6cefdf9618660f3288534217e176eacedd779a7557a8425c203f6c864ec" exitCode=0 Jan 27 20:23:30 crc kubenswrapper[4858]: I0127 20:23:30.023265 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"134de6cefdf9618660f3288534217e176eacedd779a7557a8425c203f6c864ec"} Jan 27 20:23:30 crc kubenswrapper[4858]: I0127 20:23:30.023449 4858 scope.go:117] "RemoveContainer" containerID="86373b100213bc4355e865928b2a5437a78a9df277502a47946cf0f5767d7dde" Jan 27 20:23:31 crc kubenswrapper[4858]: I0127 20:23:31.034343 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"955bc619bd742d004863858dd5a8f86f78a2f164e013b906e4efa16975027e52"} Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.560754 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn"] Jan 27 20:23:42 crc kubenswrapper[4858]: E0127 20:23:42.561670 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bd2beb-9d03-402c-bb7a-0ee191fa9f8d" containerName="pull" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.561687 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bd2beb-9d03-402c-bb7a-0ee191fa9f8d" containerName="pull" Jan 27 20:23:42 crc kubenswrapper[4858]: E0127 20:23:42.561701 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bd2beb-9d03-402c-bb7a-0ee191fa9f8d" containerName="util" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.561709 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bd2beb-9d03-402c-bb7a-0ee191fa9f8d" containerName="util" Jan 27 20:23:42 crc kubenswrapper[4858]: E0127 20:23:42.561729 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bd2beb-9d03-402c-bb7a-0ee191fa9f8d" containerName="extract" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.561737 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bd2beb-9d03-402c-bb7a-0ee191fa9f8d" containerName="extract" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.561848 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="86bd2beb-9d03-402c-bb7a-0ee191fa9f8d" containerName="extract" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.562409 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.564597 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.564753 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-nj6bh" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.565490 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.574444 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn"] Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.575139 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.579869 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.737100 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/77b99589-63b5-4df6-b9b7-fc5335eb3463-apiservice-cert\") pod \"metallb-operator-controller-manager-5c967b4747-92zgn\" (UID: \"77b99589-63b5-4df6-b9b7-fc5335eb3463\") " pod="metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.737613 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfddg\" (UniqueName: \"kubernetes.io/projected/77b99589-63b5-4df6-b9b7-fc5335eb3463-kube-api-access-xfddg\") pod \"metallb-operator-controller-manager-5c967b4747-92zgn\" (UID: \"77b99589-63b5-4df6-b9b7-fc5335eb3463\") " pod="metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.737646 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/77b99589-63b5-4df6-b9b7-fc5335eb3463-webhook-cert\") pod \"metallb-operator-controller-manager-5c967b4747-92zgn\" (UID: \"77b99589-63b5-4df6-b9b7-fc5335eb3463\") " pod="metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.800738 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt"] Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.801524 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.808361 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.808361 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-f9xkz" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.808754 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.821895 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt"] Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.857444 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfddg\" (UniqueName: \"kubernetes.io/projected/77b99589-63b5-4df6-b9b7-fc5335eb3463-kube-api-access-xfddg\") pod \"metallb-operator-controller-manager-5c967b4747-92zgn\" (UID: \"77b99589-63b5-4df6-b9b7-fc5335eb3463\") " pod="metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.857507 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/77b99589-63b5-4df6-b9b7-fc5335eb3463-webhook-cert\") pod \"metallb-operator-controller-manager-5c967b4747-92zgn\" (UID: \"77b99589-63b5-4df6-b9b7-fc5335eb3463\") " pod="metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.857701 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/77b99589-63b5-4df6-b9b7-fc5335eb3463-apiservice-cert\") pod \"metallb-operator-controller-manager-5c967b4747-92zgn\" (UID: \"77b99589-63b5-4df6-b9b7-fc5335eb3463\") " pod="metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.867456 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/77b99589-63b5-4df6-b9b7-fc5335eb3463-webhook-cert\") pod \"metallb-operator-controller-manager-5c967b4747-92zgn\" (UID: \"77b99589-63b5-4df6-b9b7-fc5335eb3463\") " pod="metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.879288 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/77b99589-63b5-4df6-b9b7-fc5335eb3463-apiservice-cert\") pod \"metallb-operator-controller-manager-5c967b4747-92zgn\" (UID: \"77b99589-63b5-4df6-b9b7-fc5335eb3463\") " pod="metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.886953 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfddg\" (UniqueName: \"kubernetes.io/projected/77b99589-63b5-4df6-b9b7-fc5335eb3463-kube-api-access-xfddg\") pod \"metallb-operator-controller-manager-5c967b4747-92zgn\" (UID: \"77b99589-63b5-4df6-b9b7-fc5335eb3463\") " pod="metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.959307 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg6s4\" (UniqueName: \"kubernetes.io/projected/079958dc-db6c-480e-90bd-1771c1c404b2-kube-api-access-kg6s4\") pod \"metallb-operator-webhook-server-5c6d8d9f7d-qxppt\" (UID: \"079958dc-db6c-480e-90bd-1771c1c404b2\") " pod="metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.959368 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/079958dc-db6c-480e-90bd-1771c1c404b2-webhook-cert\") pod \"metallb-operator-webhook-server-5c6d8d9f7d-qxppt\" (UID: \"079958dc-db6c-480e-90bd-1771c1c404b2\") " pod="metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt" Jan 27 20:23:42 crc kubenswrapper[4858]: I0127 20:23:42.959416 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/079958dc-db6c-480e-90bd-1771c1c404b2-apiservice-cert\") pod \"metallb-operator-webhook-server-5c6d8d9f7d-qxppt\" (UID: \"079958dc-db6c-480e-90bd-1771c1c404b2\") " pod="metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt" Jan 27 20:23:43 crc kubenswrapper[4858]: I0127 20:23:43.060209 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg6s4\" (UniqueName: \"kubernetes.io/projected/079958dc-db6c-480e-90bd-1771c1c404b2-kube-api-access-kg6s4\") pod \"metallb-operator-webhook-server-5c6d8d9f7d-qxppt\" (UID: \"079958dc-db6c-480e-90bd-1771c1c404b2\") " pod="metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt" Jan 27 20:23:43 crc kubenswrapper[4858]: I0127 20:23:43.060470 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/079958dc-db6c-480e-90bd-1771c1c404b2-webhook-cert\") pod \"metallb-operator-webhook-server-5c6d8d9f7d-qxppt\" (UID: \"079958dc-db6c-480e-90bd-1771c1c404b2\") " pod="metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt" Jan 27 20:23:43 crc kubenswrapper[4858]: I0127 20:23:43.060627 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/079958dc-db6c-480e-90bd-1771c1c404b2-apiservice-cert\") pod \"metallb-operator-webhook-server-5c6d8d9f7d-qxppt\" (UID: \"079958dc-db6c-480e-90bd-1771c1c404b2\") " pod="metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt" Jan 27 20:23:43 crc kubenswrapper[4858]: I0127 20:23:43.063830 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/079958dc-db6c-480e-90bd-1771c1c404b2-apiservice-cert\") pod \"metallb-operator-webhook-server-5c6d8d9f7d-qxppt\" (UID: \"079958dc-db6c-480e-90bd-1771c1c404b2\") " pod="metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt" Jan 27 20:23:43 crc kubenswrapper[4858]: I0127 20:23:43.065987 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/079958dc-db6c-480e-90bd-1771c1c404b2-webhook-cert\") pod \"metallb-operator-webhook-server-5c6d8d9f7d-qxppt\" (UID: \"079958dc-db6c-480e-90bd-1771c1c404b2\") " pod="metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt" Jan 27 20:23:43 crc kubenswrapper[4858]: I0127 20:23:43.083822 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kg6s4\" (UniqueName: \"kubernetes.io/projected/079958dc-db6c-480e-90bd-1771c1c404b2-kube-api-access-kg6s4\") pod \"metallb-operator-webhook-server-5c6d8d9f7d-qxppt\" (UID: \"079958dc-db6c-480e-90bd-1771c1c404b2\") " pod="metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt" Jan 27 20:23:43 crc kubenswrapper[4858]: I0127 20:23:43.116515 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt" Jan 27 20:23:43 crc kubenswrapper[4858]: I0127 20:23:43.184750 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn" Jan 27 20:23:43 crc kubenswrapper[4858]: I0127 20:23:43.357786 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt"] Jan 27 20:23:43 crc kubenswrapper[4858]: I0127 20:23:43.667487 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn"] Jan 27 20:23:44 crc kubenswrapper[4858]: I0127 20:23:44.116727 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt" event={"ID":"079958dc-db6c-480e-90bd-1771c1c404b2","Type":"ContainerStarted","Data":"6ce89485a6591ec784298e08b6cea23611ff26d798cb9f957150212dd17b4ceb"} Jan 27 20:23:44 crc kubenswrapper[4858]: I0127 20:23:44.119002 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn" event={"ID":"77b99589-63b5-4df6-b9b7-fc5335eb3463","Type":"ContainerStarted","Data":"42dc77d4a6fed70f35792dcd25a7a74877354e4d33cee7919c7396cd834dbd7b"} Jan 27 20:23:49 crc kubenswrapper[4858]: I0127 20:23:49.171833 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn" event={"ID":"77b99589-63b5-4df6-b9b7-fc5335eb3463","Type":"ContainerStarted","Data":"f5c46365e140ee57e8225f0d82a6d4b900029225e130911026ebcacc018aae6b"} Jan 27 20:23:49 crc kubenswrapper[4858]: I0127 20:23:49.172381 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn" Jan 27 20:23:49 crc kubenswrapper[4858]: I0127 20:23:49.173643 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt" event={"ID":"079958dc-db6c-480e-90bd-1771c1c404b2","Type":"ContainerStarted","Data":"02b9b2bcb9268f5c1c4ee6be531ebdaed73762851570f81cec24fa6dc8404b96"} Jan 27 20:23:49 crc kubenswrapper[4858]: I0127 20:23:49.173767 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt" Jan 27 20:23:49 crc kubenswrapper[4858]: I0127 20:23:49.197703 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn" podStartSLOduration=2.455319756 podStartE2EDuration="7.197684774s" podCreationTimestamp="2026-01-27 20:23:42 +0000 UTC" firstStartedPulling="2026-01-27 20:23:43.690703768 +0000 UTC m=+968.398519474" lastFinishedPulling="2026-01-27 20:23:48.433068786 +0000 UTC m=+973.140884492" observedRunningTime="2026-01-27 20:23:49.193121926 +0000 UTC m=+973.900937652" watchObservedRunningTime="2026-01-27 20:23:49.197684774 +0000 
UTC m=+973.905500480" Jan 27 20:23:49 crc kubenswrapper[4858]: I0127 20:23:49.216980 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt" podStartSLOduration=2.142466367 podStartE2EDuration="7.216960543s" podCreationTimestamp="2026-01-27 20:23:42 +0000 UTC" firstStartedPulling="2026-01-27 20:23:43.379790384 +0000 UTC m=+968.087606090" lastFinishedPulling="2026-01-27 20:23:48.45428456 +0000 UTC m=+973.162100266" observedRunningTime="2026-01-27 20:23:49.214035872 +0000 UTC m=+973.921851598" watchObservedRunningTime="2026-01-27 20:23:49.216960543 +0000 UTC m=+973.924776249" Jan 27 20:24:03 crc kubenswrapper[4858]: I0127 20:24:03.121370 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5c6d8d9f7d-qxppt" Jan 27 20:24:23 crc kubenswrapper[4858]: I0127 20:24:23.189359 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5c967b4747-92zgn" Jan 27 20:24:23 crc kubenswrapper[4858]: I0127 20:24:23.987058 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-zr8cd"] Jan 27 20:24:23 crc kubenswrapper[4858]: I0127 20:24:23.989514 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:23 crc kubenswrapper[4858]: I0127 20:24:23.993331 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-4ftdv" Jan 27 20:24:23 crc kubenswrapper[4858]: I0127 20:24:23.993331 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.007362 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.010987 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-k8wd8"] Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.014172 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k8wd8" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.023952 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.025135 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-k8wd8"] Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.084484 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/69b6591f-5854-4205-8af5-da752f5006ab-frr-conf\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.084573 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wt4v\" (UniqueName: \"kubernetes.io/projected/69b6591f-5854-4205-8af5-da752f5006ab-kube-api-access-6wt4v\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.084652 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/69b6591f-5854-4205-8af5-da752f5006ab-metrics\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.084680 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/69b6591f-5854-4205-8af5-da752f5006ab-frr-sockets\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.084700 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/69b6591f-5854-4205-8af5-da752f5006ab-frr-startup\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.084765 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69b6591f-5854-4205-8af5-da752f5006ab-metrics-certs\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.084794 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsgtw\" (UniqueName: \"kubernetes.io/projected/131f384a-33a7-421b-be46-51d5561a6e98-kube-api-access-jsgtw\") pod \"frr-k8s-webhook-server-7df86c4f6c-k8wd8\" (UID: \"131f384a-33a7-421b-be46-51d5561a6e98\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k8wd8" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.084820 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/131f384a-33a7-421b-be46-51d5561a6e98-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-k8wd8\" (UID: \"131f384a-33a7-421b-be46-51d5561a6e98\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k8wd8" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.084848 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/69b6591f-5854-4205-8af5-da752f5006ab-reloader\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.114184 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-hw6th"] Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.115944 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-hw6th" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.118163 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.118567 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.118918 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-4bzr4" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.119258 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.128175 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-flgg7"] Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.129626 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-flgg7" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.132158 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.141356 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-flgg7"] Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.185790 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-memberlist\") pod \"speaker-hw6th\" (UID: \"ca2b2ed3-9750-407d-b919-fd5c6e060e0b\") " pod="metallb-system/speaker-hw6th" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.185850 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlnj4\" (UniqueName: \"kubernetes.io/projected/12b83edf-4de2-4aa1-8dcd-147782a08fd4-kube-api-access-rlnj4\") pod \"controller-6968d8fdc4-flgg7\" (UID: \"12b83edf-4de2-4aa1-8dcd-147782a08fd4\") " pod="metallb-system/controller-6968d8fdc4-flgg7" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.185879 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-metrics-certs\") pod \"speaker-hw6th\" (UID: \"ca2b2ed3-9750-407d-b919-fd5c6e060e0b\") " pod="metallb-system/speaker-hw6th" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.185908 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/69b6591f-5854-4205-8af5-da752f5006ab-frr-conf\") 
pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.186124 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wt4v\" (UniqueName: \"kubernetes.io/projected/69b6591f-5854-4205-8af5-da752f5006ab-kube-api-access-6wt4v\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.186162 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12b83edf-4de2-4aa1-8dcd-147782a08fd4-metrics-certs\") pod \"controller-6968d8fdc4-flgg7\" (UID: \"12b83edf-4de2-4aa1-8dcd-147782a08fd4\") " pod="metallb-system/controller-6968d8fdc4-flgg7" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.186186 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vbvm\" (UniqueName: \"kubernetes.io/projected/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-kube-api-access-4vbvm\") pod \"speaker-hw6th\" (UID: \"ca2b2ed3-9750-407d-b919-fd5c6e060e0b\") " pod="metallb-system/speaker-hw6th" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.186217 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/69b6591f-5854-4205-8af5-da752f5006ab-metrics\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.186255 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/69b6591f-5854-4205-8af5-da752f5006ab-frr-startup\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.186279 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/69b6591f-5854-4205-8af5-da752f5006ab-frr-sockets\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.186302 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-metallb-excludel2\") pod \"speaker-hw6th\" (UID: \"ca2b2ed3-9750-407d-b919-fd5c6e060e0b\") " pod="metallb-system/speaker-hw6th" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.186339 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69b6591f-5854-4205-8af5-da752f5006ab-metrics-certs\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.186371 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsgtw\" (UniqueName: \"kubernetes.io/projected/131f384a-33a7-421b-be46-51d5561a6e98-kube-api-access-jsgtw\") pod \"frr-k8s-webhook-server-7df86c4f6c-k8wd8\" (UID: \"131f384a-33a7-421b-be46-51d5561a6e98\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k8wd8" Jan 27 20:24:24 crc 
kubenswrapper[4858]: I0127 20:24:24.186410 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/131f384a-33a7-421b-be46-51d5561a6e98-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-k8wd8\" (UID: \"131f384a-33a7-421b-be46-51d5561a6e98\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k8wd8" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.186441 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12b83edf-4de2-4aa1-8dcd-147782a08fd4-cert\") pod \"controller-6968d8fdc4-flgg7\" (UID: \"12b83edf-4de2-4aa1-8dcd-147782a08fd4\") " pod="metallb-system/controller-6968d8fdc4-flgg7" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.186466 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/69b6591f-5854-4205-8af5-da752f5006ab-reloader\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.186690 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/69b6591f-5854-4205-8af5-da752f5006ab-frr-conf\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.187205 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/69b6591f-5854-4205-8af5-da752f5006ab-metrics\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.187218 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/69b6591f-5854-4205-8af5-da752f5006ab-reloader\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.187480 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/69b6591f-5854-4205-8af5-da752f5006ab-frr-sockets\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.189668 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/69b6591f-5854-4205-8af5-da752f5006ab-frr-startup\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.199680 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/131f384a-33a7-421b-be46-51d5561a6e98-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-k8wd8\" (UID: \"131f384a-33a7-421b-be46-51d5561a6e98\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k8wd8" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.205028 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69b6591f-5854-4205-8af5-da752f5006ab-metrics-certs\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " 
pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.207963 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsgtw\" (UniqueName: \"kubernetes.io/projected/131f384a-33a7-421b-be46-51d5561a6e98-kube-api-access-jsgtw\") pod \"frr-k8s-webhook-server-7df86c4f6c-k8wd8\" (UID: \"131f384a-33a7-421b-be46-51d5561a6e98\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k8wd8" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.209032 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wt4v\" (UniqueName: \"kubernetes.io/projected/69b6591f-5854-4205-8af5-da752f5006ab-kube-api-access-6wt4v\") pod \"frr-k8s-zr8cd\" (UID: \"69b6591f-5854-4205-8af5-da752f5006ab\") " pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.287293 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12b83edf-4de2-4aa1-8dcd-147782a08fd4-metrics-certs\") pod \"controller-6968d8fdc4-flgg7\" (UID: \"12b83edf-4de2-4aa1-8dcd-147782a08fd4\") " pod="metallb-system/controller-6968d8fdc4-flgg7" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.287340 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vbvm\" (UniqueName: \"kubernetes.io/projected/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-kube-api-access-4vbvm\") pod \"speaker-hw6th\" (UID: \"ca2b2ed3-9750-407d-b919-fd5c6e060e0b\") " pod="metallb-system/speaker-hw6th" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.287364 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-metallb-excludel2\") pod \"speaker-hw6th\" (UID: \"ca2b2ed3-9750-407d-b919-fd5c6e060e0b\") " pod="metallb-system/speaker-hw6th" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.287400 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12b83edf-4de2-4aa1-8dcd-147782a08fd4-cert\") pod \"controller-6968d8fdc4-flgg7\" (UID: \"12b83edf-4de2-4aa1-8dcd-147782a08fd4\") " pod="metallb-system/controller-6968d8fdc4-flgg7" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.287426 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-memberlist\") pod \"speaker-hw6th\" (UID: \"ca2b2ed3-9750-407d-b919-fd5c6e060e0b\") " pod="metallb-system/speaker-hw6th" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.287443 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlnj4\" (UniqueName: \"kubernetes.io/projected/12b83edf-4de2-4aa1-8dcd-147782a08fd4-kube-api-access-rlnj4\") pod \"controller-6968d8fdc4-flgg7\" (UID: \"12b83edf-4de2-4aa1-8dcd-147782a08fd4\") " pod="metallb-system/controller-6968d8fdc4-flgg7" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.287465 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-metrics-certs\") pod \"speaker-hw6th\" (UID: \"ca2b2ed3-9750-407d-b919-fd5c6e060e0b\") " pod="metallb-system/speaker-hw6th" Jan 27 20:24:24 crc kubenswrapper[4858]: E0127 20:24:24.287567 4858 secret.go:188] Couldn't get secret 
metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 20:24:24 crc kubenswrapper[4858]: E0127 20:24:24.287647 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-memberlist podName:ca2b2ed3-9750-407d-b919-fd5c6e060e0b nodeName:}" failed. No retries permitted until 2026-01-27 20:24:24.787626229 +0000 UTC m=+1009.495441935 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-memberlist") pod "speaker-hw6th" (UID: "ca2b2ed3-9750-407d-b919-fd5c6e060e0b") : secret "metallb-memberlist" not found Jan 27 20:24:24 crc kubenswrapper[4858]: E0127 20:24:24.287572 4858 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 27 20:24:24 crc kubenswrapper[4858]: E0127 20:24:24.287701 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-metrics-certs podName:ca2b2ed3-9750-407d-b919-fd5c6e060e0b nodeName:}" failed. No retries permitted until 2026-01-27 20:24:24.787690531 +0000 UTC m=+1009.495506237 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-metrics-certs") pod "speaker-hw6th" (UID: "ca2b2ed3-9750-407d-b919-fd5c6e060e0b") : secret "speaker-certs-secret" not found Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.288315 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-metallb-excludel2\") pod \"speaker-hw6th\" (UID: \"ca2b2ed3-9750-407d-b919-fd5c6e060e0b\") " pod="metallb-system/speaker-hw6th" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.292180 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12b83edf-4de2-4aa1-8dcd-147782a08fd4-metrics-certs\") pod \"controller-6968d8fdc4-flgg7\" (UID: \"12b83edf-4de2-4aa1-8dcd-147782a08fd4\") " pod="metallb-system/controller-6968d8fdc4-flgg7" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.292285 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12b83edf-4de2-4aa1-8dcd-147782a08fd4-cert\") pod \"controller-6968d8fdc4-flgg7\" (UID: \"12b83edf-4de2-4aa1-8dcd-147782a08fd4\") " pod="metallb-system/controller-6968d8fdc4-flgg7" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.313336 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.327569 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlnj4\" (UniqueName: \"kubernetes.io/projected/12b83edf-4de2-4aa1-8dcd-147782a08fd4-kube-api-access-rlnj4\") pod \"controller-6968d8fdc4-flgg7\" (UID: \"12b83edf-4de2-4aa1-8dcd-147782a08fd4\") " pod="metallb-system/controller-6968d8fdc4-flgg7" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.332825 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vbvm\" (UniqueName: \"kubernetes.io/projected/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-kube-api-access-4vbvm\") pod \"speaker-hw6th\" (UID: \"ca2b2ed3-9750-407d-b919-fd5c6e060e0b\") " pod="metallb-system/speaker-hw6th" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.332936 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k8wd8" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.450707 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-flgg7" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.764392 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-k8wd8"] Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.795708 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-memberlist\") pod \"speaker-hw6th\" (UID: \"ca2b2ed3-9750-407d-b919-fd5c6e060e0b\") " pod="metallb-system/speaker-hw6th" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.796117 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-metrics-certs\") pod \"speaker-hw6th\" (UID: \"ca2b2ed3-9750-407d-b919-fd5c6e060e0b\") " pod="metallb-system/speaker-hw6th" Jan 27 20:24:24 crc kubenswrapper[4858]: E0127 20:24:24.795974 4858 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 20:24:24 crc kubenswrapper[4858]: E0127 20:24:24.796673 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-memberlist podName:ca2b2ed3-9750-407d-b919-fd5c6e060e0b nodeName:}" failed. No retries permitted until 2026-01-27 20:24:25.796652984 +0000 UTC m=+1010.504468690 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-memberlist") pod "speaker-hw6th" (UID: "ca2b2ed3-9750-407d-b919-fd5c6e060e0b") : secret "metallb-memberlist" not found Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.802364 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-metrics-certs\") pod \"speaker-hw6th\" (UID: \"ca2b2ed3-9750-407d-b919-fd5c6e060e0b\") " pod="metallb-system/speaker-hw6th" Jan 27 20:24:24 crc kubenswrapper[4858]: I0127 20:24:24.956101 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-flgg7"] Jan 27 20:24:25 crc kubenswrapper[4858]: I0127 20:24:25.430350 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k8wd8" event={"ID":"131f384a-33a7-421b-be46-51d5561a6e98","Type":"ContainerStarted","Data":"a13fa5780f891444488da1c6bb0e97f5ef99d4184f3229280d6a975d5e59865c"} Jan 27 20:24:25 crc kubenswrapper[4858]: I0127 20:24:25.432255 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-flgg7" event={"ID":"12b83edf-4de2-4aa1-8dcd-147782a08fd4","Type":"ContainerStarted","Data":"5e8ab642aa1c314dd83152c8dcaf50bfa7afd7eeb2ee0d20f111e23b7a68c748"} Jan 27 20:24:25 crc kubenswrapper[4858]: I0127 20:24:25.432291 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-flgg7" event={"ID":"12b83edf-4de2-4aa1-8dcd-147782a08fd4","Type":"ContainerStarted","Data":"34781986e74249d83644d5b7f5326199d1c6b9d300d961c779c6bd2bed5aacf8"} Jan 27 20:24:25 crc kubenswrapper[4858]: I0127 20:24:25.432307 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-flgg7" event={"ID":"12b83edf-4de2-4aa1-8dcd-147782a08fd4","Type":"ContainerStarted","Data":"62bc07b0c873097ad9a329e78f77a829b26a6c3ad391b0c185e5dba3f2b1aa54"} Jan 27 20:24:25 crc kubenswrapper[4858]: I0127 20:24:25.432355 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-flgg7" Jan 27 20:24:25 crc kubenswrapper[4858]: I0127 20:24:25.433438 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr8cd" event={"ID":"69b6591f-5854-4205-8af5-da752f5006ab","Type":"ContainerStarted","Data":"01798032f38cd2bb7899ed0b994ea4498e0d468a2eb7f3b8ba79e7535b183d2d"} Jan 27 20:24:25 crc kubenswrapper[4858]: I0127 20:24:25.455386 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-flgg7" podStartSLOduration=1.455364618 podStartE2EDuration="1.455364618s" podCreationTimestamp="2026-01-27 20:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:24:25.446085079 +0000 UTC m=+1010.153900795" watchObservedRunningTime="2026-01-27 20:24:25.455364618 +0000 UTC m=+1010.163180324" Jan 27 20:24:25 crc kubenswrapper[4858]: I0127 20:24:25.810937 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-memberlist\") pod \"speaker-hw6th\" (UID: \"ca2b2ed3-9750-407d-b919-fd5c6e060e0b\") " pod="metallb-system/speaker-hw6th" Jan 27 20:24:25 crc kubenswrapper[4858]: I0127 20:24:25.816252 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ca2b2ed3-9750-407d-b919-fd5c6e060e0b-memberlist\") pod \"speaker-hw6th\" (UID: \"ca2b2ed3-9750-407d-b919-fd5c6e060e0b\") " pod="metallb-system/speaker-hw6th" Jan 27 20:24:25 crc kubenswrapper[4858]: I0127 20:24:25.932455 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-hw6th" Jan 27 20:24:26 crc kubenswrapper[4858]: I0127 20:24:26.459522 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hw6th" event={"ID":"ca2b2ed3-9750-407d-b919-fd5c6e060e0b","Type":"ContainerStarted","Data":"65c6006ad4a49eda85cdb3fd0716bc3222ee0421d0d6f3e660c140e74a45b968"} Jan 27 20:24:26 crc kubenswrapper[4858]: I0127 20:24:26.459993 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hw6th" event={"ID":"ca2b2ed3-9750-407d-b919-fd5c6e060e0b","Type":"ContainerStarted","Data":"9c1f2303312e3dc57558f8c7aca42ee447a6bbfdaf9a08d471be48b0e6d30ba5"} Jan 27 20:24:27 crc kubenswrapper[4858]: I0127 20:24:27.471699 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hw6th" event={"ID":"ca2b2ed3-9750-407d-b919-fd5c6e060e0b","Type":"ContainerStarted","Data":"f02206783fb918893c1c903958ac876312b55678a04467f216caa371f6239a53"} Jan 27 20:24:27 crc kubenswrapper[4858]: I0127 20:24:27.471849 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-hw6th" Jan 27 20:24:27 crc kubenswrapper[4858]: I0127 20:24:27.508347 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-hw6th" podStartSLOduration=3.508332373 podStartE2EDuration="3.508332373s" podCreationTimestamp="2026-01-27 20:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:24:27.50465093 +0000 UTC m=+1012.212466656" watchObservedRunningTime="2026-01-27 20:24:27.508332373 +0000 UTC m=+1012.216148069" Jan 27 20:24:28 crc kubenswrapper[4858]: I0127 20:24:28.759982 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4phnf"] Jan 27 20:24:28 crc kubenswrapper[4858]: I0127 20:24:28.762166 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4phnf" Jan 27 20:24:28 crc kubenswrapper[4858]: I0127 20:24:28.780662 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4phnf"] Jan 27 20:24:28 crc kubenswrapper[4858]: I0127 20:24:28.954045 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838da25d-1656-4908-905a-27046adfbc55-catalog-content\") pod \"redhat-marketplace-4phnf\" (UID: \"838da25d-1656-4908-905a-27046adfbc55\") " pod="openshift-marketplace/redhat-marketplace-4phnf" Jan 27 20:24:28 crc kubenswrapper[4858]: I0127 20:24:28.954147 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs5ct\" (UniqueName: \"kubernetes.io/projected/838da25d-1656-4908-905a-27046adfbc55-kube-api-access-xs5ct\") pod \"redhat-marketplace-4phnf\" (UID: \"838da25d-1656-4908-905a-27046adfbc55\") " pod="openshift-marketplace/redhat-marketplace-4phnf" Jan 27 20:24:28 crc kubenswrapper[4858]: I0127 20:24:28.954181 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838da25d-1656-4908-905a-27046adfbc55-utilities\") pod \"redhat-marketplace-4phnf\" (UID: \"838da25d-1656-4908-905a-27046adfbc55\") " pod="openshift-marketplace/redhat-marketplace-4phnf" Jan 27 20:24:29 crc kubenswrapper[4858]: I0127 20:24:29.055020 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838da25d-1656-4908-905a-27046adfbc55-catalog-content\") pod \"redhat-marketplace-4phnf\" (UID: \"838da25d-1656-4908-905a-27046adfbc55\") " pod="openshift-marketplace/redhat-marketplace-4phnf" Jan 27 20:24:29 crc kubenswrapper[4858]: I0127 20:24:29.055143 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs5ct\" (UniqueName: \"kubernetes.io/projected/838da25d-1656-4908-905a-27046adfbc55-kube-api-access-xs5ct\") pod \"redhat-marketplace-4phnf\" (UID: \"838da25d-1656-4908-905a-27046adfbc55\") " pod="openshift-marketplace/redhat-marketplace-4phnf" Jan 27 20:24:29 crc kubenswrapper[4858]: I0127 20:24:29.055180 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838da25d-1656-4908-905a-27046adfbc55-utilities\") pod \"redhat-marketplace-4phnf\" (UID: \"838da25d-1656-4908-905a-27046adfbc55\") " pod="openshift-marketplace/redhat-marketplace-4phnf" Jan 27 20:24:29 crc kubenswrapper[4858]: I0127 20:24:29.055861 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838da25d-1656-4908-905a-27046adfbc55-utilities\") pod \"redhat-marketplace-4phnf\" (UID: \"838da25d-1656-4908-905a-27046adfbc55\") " pod="openshift-marketplace/redhat-marketplace-4phnf" Jan 27 20:24:29 crc kubenswrapper[4858]: I0127 20:24:29.055913 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838da25d-1656-4908-905a-27046adfbc55-catalog-content\") pod \"redhat-marketplace-4phnf\" (UID: \"838da25d-1656-4908-905a-27046adfbc55\") " pod="openshift-marketplace/redhat-marketplace-4phnf" Jan 27 20:24:29 crc kubenswrapper[4858]: I0127 20:24:29.079356 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xs5ct\" (UniqueName: \"kubernetes.io/projected/838da25d-1656-4908-905a-27046adfbc55-kube-api-access-xs5ct\") pod \"redhat-marketplace-4phnf\" (UID: \"838da25d-1656-4908-905a-27046adfbc55\") " pod="openshift-marketplace/redhat-marketplace-4phnf" Jan 27 20:24:29 crc kubenswrapper[4858]: I0127 20:24:29.079679 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4phnf" Jan 27 20:24:29 crc kubenswrapper[4858]: I0127 20:24:29.641940 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4phnf"] Jan 27 20:24:30 crc kubenswrapper[4858]: I0127 20:24:30.500685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4phnf" event={"ID":"838da25d-1656-4908-905a-27046adfbc55","Type":"ContainerStarted","Data":"2d8b7e9bab80e15807e2170cf0d0ee1024a79d93472c0def5d94daad5a4b1365"} Jan 27 20:24:30 crc kubenswrapper[4858]: I0127 20:24:30.501010 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4phnf" event={"ID":"838da25d-1656-4908-905a-27046adfbc55","Type":"ContainerStarted","Data":"ecceb828c5d50c0aa6741592857c73ef4acde4e6f2f5df4b5a9ee461030d8172"} Jan 27 20:24:31 crc kubenswrapper[4858]: I0127 20:24:31.511148 4858 generic.go:334] "Generic (PLEG): container finished" podID="838da25d-1656-4908-905a-27046adfbc55" containerID="2d8b7e9bab80e15807e2170cf0d0ee1024a79d93472c0def5d94daad5a4b1365" exitCode=0 Jan 27 20:24:31 crc kubenswrapper[4858]: I0127 20:24:31.512180 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4phnf" event={"ID":"838da25d-1656-4908-905a-27046adfbc55","Type":"ContainerDied","Data":"2d8b7e9bab80e15807e2170cf0d0ee1024a79d93472c0def5d94daad5a4b1365"} Jan 27 20:24:34 crc kubenswrapper[4858]: I0127 20:24:34.537748 4858 generic.go:334] "Generic (PLEG): container finished" podID="838da25d-1656-4908-905a-27046adfbc55" containerID="e945e1f0b00d7e0ec1d0f7143d52e32a580a8e650fc9e6ea3645af9af4d0b678" exitCode=0 Jan 27 20:24:34 crc kubenswrapper[4858]: I0127 20:24:34.537855 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4phnf" event={"ID":"838da25d-1656-4908-905a-27046adfbc55","Type":"ContainerDied","Data":"e945e1f0b00d7e0ec1d0f7143d52e32a580a8e650fc9e6ea3645af9af4d0b678"} Jan 27 20:24:34 crc kubenswrapper[4858]: I0127 20:24:34.542076 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k8wd8" event={"ID":"131f384a-33a7-421b-be46-51d5561a6e98","Type":"ContainerStarted","Data":"6b9326752301561dab2deed5e51a1b54b204dc0f68a6201a513d583f9e4c2a3b"} Jan 27 20:24:34 crc kubenswrapper[4858]: I0127 20:24:34.544044 4858 generic.go:334] "Generic (PLEG): container finished" podID="69b6591f-5854-4205-8af5-da752f5006ab" containerID="d517c5620514e1708bccf3be263630b94e3ab43671ccdb102cfd0574dbe6601d" exitCode=0 Jan 27 20:24:34 crc kubenswrapper[4858]: I0127 20:24:34.544090 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr8cd" event={"ID":"69b6591f-5854-4205-8af5-da752f5006ab","Type":"ContainerDied","Data":"d517c5620514e1708bccf3be263630b94e3ab43671ccdb102cfd0574dbe6601d"} Jan 27 20:24:34 crc kubenswrapper[4858]: I0127 20:24:34.607468 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k8wd8" 
Jan 27 20:24:35 crc kubenswrapper[4858]: I0127 20:24:35.554407 4858 generic.go:334] "Generic (PLEG): container finished" podID="69b6591f-5854-4205-8af5-da752f5006ab" containerID="8161a3062a8f9225174d017d625795148b3e26cb8ea8b5e60e85ea2ef698da42" exitCode=0 Jan 27 20:24:35 crc kubenswrapper[4858]: I0127 20:24:35.554495 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr8cd" event={"ID":"69b6591f-5854-4205-8af5-da752f5006ab","Type":"ContainerDied","Data":"8161a3062a8f9225174d017d625795148b3e26cb8ea8b5e60e85ea2ef698da42"} Jan 27 20:24:35 crc kubenswrapper[4858]: I0127 20:24:35.559320 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4phnf" event={"ID":"838da25d-1656-4908-905a-27046adfbc55","Type":"ContainerStarted","Data":"1cdf1f836124deb2d29042e1d34aef34259ac238d18aaafbb3a699b2a3e53b77"} Jan 27 20:24:35 crc kubenswrapper[4858]: I0127 20:24:35.559674 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k8wd8" Jan 27 20:24:35 crc kubenswrapper[4858]: I0127 20:24:35.610538 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4phnf" podStartSLOduration=6.059907375 podStartE2EDuration="7.61050721s" podCreationTimestamp="2026-01-27 20:24:28 +0000 UTC" firstStartedPulling="2026-01-27 20:24:33.392346333 +0000 UTC m=+1018.100162039" lastFinishedPulling="2026-01-27 20:24:34.942946128 +0000 UTC m=+1019.650761874" observedRunningTime="2026-01-27 20:24:35.606689543 +0000 UTC m=+1020.314505289" watchObservedRunningTime="2026-01-27 20:24:35.61050721 +0000 UTC m=+1020.318322926" Jan 27 20:24:36 crc kubenswrapper[4858]: I0127 20:24:36.568494 4858 generic.go:334] "Generic (PLEG): container finished" podID="69b6591f-5854-4205-8af5-da752f5006ab" containerID="01934cc928b7ab4b800b3c141943f6bcc7a3596c52b23bd533a87268729178cd" exitCode=0 Jan 27 20:24:36 crc kubenswrapper[4858]: I0127 20:24:36.568581 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr8cd" event={"ID":"69b6591f-5854-4205-8af5-da752f5006ab","Type":"ContainerDied","Data":"01934cc928b7ab4b800b3c141943f6bcc7a3596c52b23bd533a87268729178cd"} Jan 27 20:24:37 crc kubenswrapper[4858]: I0127 20:24:37.583898 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr8cd" event={"ID":"69b6591f-5854-4205-8af5-da752f5006ab","Type":"ContainerStarted","Data":"80373b9e084956787a8dfa45ea1518bacc82d4e5ca1e20f546826b1b46d341c2"} Jan 27 20:24:37 crc kubenswrapper[4858]: I0127 20:24:37.584272 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr8cd" event={"ID":"69b6591f-5854-4205-8af5-da752f5006ab","Type":"ContainerStarted","Data":"4d4f04d697315ebdaffd61e9b047b237719144065baaf1142e371322eaf293d3"} Jan 27 20:24:37 crc kubenswrapper[4858]: I0127 20:24:37.584284 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr8cd" 
event={"ID":"69b6591f-5854-4205-8af5-da752f5006ab","Type":"ContainerStarted","Data":"64d0ec6333636a81e21f797bc14bcc4821defdcbd0c2ee19e768822db2dc3f7c"} Jan 27 20:24:37 crc kubenswrapper[4858]: I0127 20:24:37.584294 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr8cd" event={"ID":"69b6591f-5854-4205-8af5-da752f5006ab","Type":"ContainerStarted","Data":"9f57b65e00dcf3ef855465386820058432fcb181deccb322a7f5ffea9ccca885"} Jan 27 20:24:37 crc kubenswrapper[4858]: I0127 20:24:37.584303 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr8cd" event={"ID":"69b6591f-5854-4205-8af5-da752f5006ab","Type":"ContainerStarted","Data":"c24ff89bc2f60b17a31ba2a862055205c6f58a9af00c2a70f56aa3c207f30f42"} Jan 27 20:24:38 crc kubenswrapper[4858]: I0127 20:24:38.598045 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zr8cd" event={"ID":"69b6591f-5854-4205-8af5-da752f5006ab","Type":"ContainerStarted","Data":"0bbc9bef29bf53b86d0881c13715b00f1621da5dbfac6538ef5b5bcb51bd296e"} Jan 27 20:24:38 crc kubenswrapper[4858]: I0127 20:24:38.598388 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:38 crc kubenswrapper[4858]: I0127 20:24:38.637240 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-zr8cd" podStartSLOduration=6.652964983 podStartE2EDuration="15.637219356s" podCreationTimestamp="2026-01-27 20:24:23 +0000 UTC" firstStartedPulling="2026-01-27 20:24:24.49101006 +0000 UTC m=+1009.198825766" lastFinishedPulling="2026-01-27 20:24:33.475264433 +0000 UTC m=+1018.183080139" observedRunningTime="2026-01-27 20:24:38.629949383 +0000 UTC m=+1023.337765109" watchObservedRunningTime="2026-01-27 20:24:38.637219356 +0000 UTC m=+1023.345035072" Jan 27 20:24:39 crc kubenswrapper[4858]: I0127 20:24:39.081054 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4phnf" Jan 27 20:24:39 crc kubenswrapper[4858]: I0127 20:24:39.081116 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4phnf" Jan 27 20:24:39 crc kubenswrapper[4858]: I0127 20:24:39.140122 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4phnf" Jan 27 20:24:39 crc kubenswrapper[4858]: I0127 20:24:39.314431 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:39 crc kubenswrapper[4858]: I0127 20:24:39.355116 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:40 crc kubenswrapper[4858]: I0127 20:24:40.222834 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bppj6"] Jan 27 20:24:40 crc kubenswrapper[4858]: I0127 20:24:40.224375 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bppj6" Jan 27 20:24:40 crc kubenswrapper[4858]: I0127 20:24:40.242606 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bppj6"] Jan 27 20:24:40 crc kubenswrapper[4858]: I0127 20:24:40.333380 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ea1e817-c0c8-4f44-8c70-167358f102b6-catalog-content\") pod \"certified-operators-bppj6\" (UID: \"8ea1e817-c0c8-4f44-8c70-167358f102b6\") " pod="openshift-marketplace/certified-operators-bppj6" Jan 27 20:24:40 crc kubenswrapper[4858]: I0127 20:24:40.333434 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plffg\" (UniqueName: \"kubernetes.io/projected/8ea1e817-c0c8-4f44-8c70-167358f102b6-kube-api-access-plffg\") pod \"certified-operators-bppj6\" (UID: \"8ea1e817-c0c8-4f44-8c70-167358f102b6\") " pod="openshift-marketplace/certified-operators-bppj6" Jan 27 20:24:40 crc kubenswrapper[4858]: I0127 20:24:40.333651 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ea1e817-c0c8-4f44-8c70-167358f102b6-utilities\") pod \"certified-operators-bppj6\" (UID: \"8ea1e817-c0c8-4f44-8c70-167358f102b6\") " pod="openshift-marketplace/certified-operators-bppj6" Jan 27 20:24:40 crc kubenswrapper[4858]: I0127 20:24:40.435284 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ea1e817-c0c8-4f44-8c70-167358f102b6-catalog-content\") pod \"certified-operators-bppj6\" (UID: \"8ea1e817-c0c8-4f44-8c70-167358f102b6\") " pod="openshift-marketplace/certified-operators-bppj6" Jan 27 20:24:40 crc kubenswrapper[4858]: I0127 20:24:40.435333 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plffg\" (UniqueName: \"kubernetes.io/projected/8ea1e817-c0c8-4f44-8c70-167358f102b6-kube-api-access-plffg\") pod \"certified-operators-bppj6\" (UID: \"8ea1e817-c0c8-4f44-8c70-167358f102b6\") " pod="openshift-marketplace/certified-operators-bppj6" Jan 27 20:24:40 crc kubenswrapper[4858]: I0127 20:24:40.435352 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ea1e817-c0c8-4f44-8c70-167358f102b6-utilities\") pod \"certified-operators-bppj6\" (UID: \"8ea1e817-c0c8-4f44-8c70-167358f102b6\") " pod="openshift-marketplace/certified-operators-bppj6" Jan 27 20:24:40 crc kubenswrapper[4858]: I0127 20:24:40.435930 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ea1e817-c0c8-4f44-8c70-167358f102b6-utilities\") pod \"certified-operators-bppj6\" (UID: \"8ea1e817-c0c8-4f44-8c70-167358f102b6\") " pod="openshift-marketplace/certified-operators-bppj6" Jan 27 20:24:40 crc kubenswrapper[4858]: I0127 20:24:40.436001 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ea1e817-c0c8-4f44-8c70-167358f102b6-catalog-content\") pod \"certified-operators-bppj6\" (UID: \"8ea1e817-c0c8-4f44-8c70-167358f102b6\") " pod="openshift-marketplace/certified-operators-bppj6" Jan 27 20:24:40 crc kubenswrapper[4858]: I0127 20:24:40.463652 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-plffg\" (UniqueName: \"kubernetes.io/projected/8ea1e817-c0c8-4f44-8c70-167358f102b6-kube-api-access-plffg\") pod \"certified-operators-bppj6\" (UID: \"8ea1e817-c0c8-4f44-8c70-167358f102b6\") " pod="openshift-marketplace/certified-operators-bppj6" Jan 27 20:24:40 crc kubenswrapper[4858]: I0127 20:24:40.592828 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bppj6" Jan 27 20:24:41 crc kubenswrapper[4858]: I0127 20:24:41.054568 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bppj6"] Jan 27 20:24:41 crc kubenswrapper[4858]: W0127 20:24:41.060724 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ea1e817_c0c8_4f44_8c70_167358f102b6.slice/crio-38e38502c4186875d236a53546d9f295cb7215c4b8ad58ce2ac0d0e80790dece WatchSource:0}: Error finding container 38e38502c4186875d236a53546d9f295cb7215c4b8ad58ce2ac0d0e80790dece: Status 404 returned error can't find the container with id 38e38502c4186875d236a53546d9f295cb7215c4b8ad58ce2ac0d0e80790dece Jan 27 20:24:41 crc kubenswrapper[4858]: I0127 20:24:41.630063 4858 generic.go:334] "Generic (PLEG): container finished" podID="8ea1e817-c0c8-4f44-8c70-167358f102b6" containerID="6674b88908ea5dccfe8b0858b179cba80d4b5bd66d56bc6b52f4be956b63b032" exitCode=0 Jan 27 20:24:41 crc kubenswrapper[4858]: I0127 20:24:41.630124 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bppj6" event={"ID":"8ea1e817-c0c8-4f44-8c70-167358f102b6","Type":"ContainerDied","Data":"6674b88908ea5dccfe8b0858b179cba80d4b5bd66d56bc6b52f4be956b63b032"} Jan 27 20:24:41 crc kubenswrapper[4858]: I0127 20:24:41.630160 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bppj6" event={"ID":"8ea1e817-c0c8-4f44-8c70-167358f102b6","Type":"ContainerStarted","Data":"38e38502c4186875d236a53546d9f295cb7215c4b8ad58ce2ac0d0e80790dece"} Jan 27 20:24:43 crc kubenswrapper[4858]: I0127 20:24:43.656131 4858 generic.go:334] "Generic (PLEG): container finished" podID="8ea1e817-c0c8-4f44-8c70-167358f102b6" containerID="f9044f20f520269fe50cb9f672b8af3d8d9cc0e94e23194e8143e8b542030490" exitCode=0 Jan 27 20:24:43 crc kubenswrapper[4858]: I0127 20:24:43.656254 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bppj6" event={"ID":"8ea1e817-c0c8-4f44-8c70-167358f102b6","Type":"ContainerDied","Data":"f9044f20f520269fe50cb9f672b8af3d8d9cc0e94e23194e8143e8b542030490"} Jan 27 20:24:44 crc kubenswrapper[4858]: I0127 20:24:44.343088 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k8wd8" Jan 27 20:24:44 crc kubenswrapper[4858]: I0127 20:24:44.457355 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-flgg7" Jan 27 20:24:44 crc kubenswrapper[4858]: I0127 20:24:44.668420 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bppj6" event={"ID":"8ea1e817-c0c8-4f44-8c70-167358f102b6","Type":"ContainerStarted","Data":"dcae2cde0132041a00c1e2d36a314d4b07f404e5155755b91bbcd16ca790d8ed"} Jan 27 20:24:44 crc kubenswrapper[4858]: I0127 20:24:44.688414 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-bppj6" podStartSLOduration=1.99510131 podStartE2EDuration="4.688393813s" podCreationTimestamp="2026-01-27 20:24:40 +0000 UTC" firstStartedPulling="2026-01-27 20:24:41.6320921 +0000 UTC m=+1026.339907816" lastFinishedPulling="2026-01-27 20:24:44.325384613 +0000 UTC m=+1029.033200319" observedRunningTime="2026-01-27 20:24:44.688263899 +0000 UTC m=+1029.396079615" watchObservedRunningTime="2026-01-27 20:24:44.688393813 +0000 UTC m=+1029.396209519" Jan 27 20:24:45 crc kubenswrapper[4858]: I0127 20:24:45.939629 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-hw6th" Jan 27 20:24:48 crc kubenswrapper[4858]: I0127 20:24:48.954113 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rnkrn"] Jan 27 20:24:48 crc kubenswrapper[4858]: I0127 20:24:48.955476 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rnkrn" Jan 27 20:24:48 crc kubenswrapper[4858]: I0127 20:24:48.958081 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 20:24:48 crc kubenswrapper[4858]: I0127 20:24:48.958265 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-jtknh" Jan 27 20:24:48 crc kubenswrapper[4858]: I0127 20:24:48.958527 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 20:24:48 crc kubenswrapper[4858]: I0127 20:24:48.971680 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rnkrn"] Jan 27 20:24:49 crc kubenswrapper[4858]: I0127 20:24:49.068890 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvcrx\" (UniqueName: \"kubernetes.io/projected/0e59fac2-8eb9-44c9-a2ef-8e964a17e281-kube-api-access-dvcrx\") pod \"openstack-operator-index-rnkrn\" (UID: \"0e59fac2-8eb9-44c9-a2ef-8e964a17e281\") " pod="openstack-operators/openstack-operator-index-rnkrn" Jan 27 20:24:49 crc kubenswrapper[4858]: I0127 20:24:49.123479 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4phnf" Jan 27 20:24:49 crc kubenswrapper[4858]: I0127 20:24:49.170702 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvcrx\" (UniqueName: \"kubernetes.io/projected/0e59fac2-8eb9-44c9-a2ef-8e964a17e281-kube-api-access-dvcrx\") pod \"openstack-operator-index-rnkrn\" (UID: \"0e59fac2-8eb9-44c9-a2ef-8e964a17e281\") " pod="openstack-operators/openstack-operator-index-rnkrn" Jan 27 20:24:49 crc kubenswrapper[4858]: I0127 20:24:49.191110 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvcrx\" (UniqueName: \"kubernetes.io/projected/0e59fac2-8eb9-44c9-a2ef-8e964a17e281-kube-api-access-dvcrx\") pod \"openstack-operator-index-rnkrn\" (UID: \"0e59fac2-8eb9-44c9-a2ef-8e964a17e281\") " pod="openstack-operators/openstack-operator-index-rnkrn" Jan 27 20:24:49 crc kubenswrapper[4858]: I0127 20:24:49.272939 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rnkrn" Jan 27 20:24:49 crc kubenswrapper[4858]: I0127 20:24:49.673037 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rnkrn"] Jan 27 20:24:49 crc kubenswrapper[4858]: W0127 20:24:49.676996 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e59fac2_8eb9_44c9_a2ef_8e964a17e281.slice/crio-9c7bac1eb644a885de53b1a775623453973fbd12cee3097456e4d2f91d0bf119 WatchSource:0}: Error finding container 9c7bac1eb644a885de53b1a775623453973fbd12cee3097456e4d2f91d0bf119: Status 404 returned error can't find the container with id 9c7bac1eb644a885de53b1a775623453973fbd12cee3097456e4d2f91d0bf119 Jan 27 20:24:49 crc kubenswrapper[4858]: I0127 20:24:49.708771 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rnkrn" event={"ID":"0e59fac2-8eb9-44c9-a2ef-8e964a17e281","Type":"ContainerStarted","Data":"9c7bac1eb644a885de53b1a775623453973fbd12cee3097456e4d2f91d0bf119"} Jan 27 20:24:50 crc kubenswrapper[4858]: I0127 20:24:50.593787 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bppj6" Jan 27 20:24:50 crc kubenswrapper[4858]: I0127 20:24:50.594042 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bppj6" Jan 27 20:24:50 crc kubenswrapper[4858]: I0127 20:24:50.634194 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bppj6" Jan 27 20:24:50 crc kubenswrapper[4858]: I0127 20:24:50.755417 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bppj6" Jan 27 20:24:53 crc kubenswrapper[4858]: I0127 20:24:53.329981 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rnkrn"] Jan 27 20:24:53 crc kubenswrapper[4858]: I0127 20:24:53.744347 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rnkrn" event={"ID":"0e59fac2-8eb9-44c9-a2ef-8e964a17e281","Type":"ContainerStarted","Data":"1d65ad47d76c883f860adc9288908dd9eeea8532aa1ff1a99776123e4c2def28"} Jan 27 20:24:53 crc kubenswrapper[4858]: I0127 20:24:53.766960 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rnkrn" podStartSLOduration=2.842497019 podStartE2EDuration="5.766921462s" podCreationTimestamp="2026-01-27 20:24:48 +0000 UTC" firstStartedPulling="2026-01-27 20:24:49.679881293 +0000 UTC m=+1034.387696999" lastFinishedPulling="2026-01-27 20:24:52.604305736 +0000 UTC m=+1037.312121442" observedRunningTime="2026-01-27 20:24:53.761420949 +0000 UTC m=+1038.469236675" watchObservedRunningTime="2026-01-27 20:24:53.766921462 +0000 UTC m=+1038.474737178" Jan 27 20:24:53 crc kubenswrapper[4858]: I0127 20:24:53.934850 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4phnf"] Jan 27 20:24:53 crc kubenswrapper[4858]: I0127 20:24:53.935150 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4phnf" podUID="838da25d-1656-4908-905a-27046adfbc55" containerName="registry-server" containerID="cri-o://1cdf1f836124deb2d29042e1d34aef34259ac238d18aaafbb3a699b2a3e53b77" 
gracePeriod=2 Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.189758 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-w7wsb"] Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.190840 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-w7wsb" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.197664 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-w7wsb"] Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.318882 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-zr8cd" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.360933 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgjv6\" (UniqueName: \"kubernetes.io/projected/ba4e832d-ff36-45e9-90b9-44e125906dba-kube-api-access-bgjv6\") pod \"openstack-operator-index-w7wsb\" (UID: \"ba4e832d-ff36-45e9-90b9-44e125906dba\") " pod="openstack-operators/openstack-operator-index-w7wsb" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.400411 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4phnf" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.462812 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgjv6\" (UniqueName: \"kubernetes.io/projected/ba4e832d-ff36-45e9-90b9-44e125906dba-kube-api-access-bgjv6\") pod \"openstack-operator-index-w7wsb\" (UID: \"ba4e832d-ff36-45e9-90b9-44e125906dba\") " pod="openstack-operators/openstack-operator-index-w7wsb" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.489712 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgjv6\" (UniqueName: \"kubernetes.io/projected/ba4e832d-ff36-45e9-90b9-44e125906dba-kube-api-access-bgjv6\") pod \"openstack-operator-index-w7wsb\" (UID: \"ba4e832d-ff36-45e9-90b9-44e125906dba\") " pod="openstack-operators/openstack-operator-index-w7wsb" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.516115 4858 util.go:30] "No sandbox for pod can be found. 
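Need to start a new one" pod="openstack-operators/openstack-operator-index-w7wsb"

The deletion of redhat-marketplace-4phnf above shows the teardown ordering: the API DELETE triggers "Killing container with a grace period" (gracePeriod=2 here), and once the containers are gone the volume manager unmounts each of the pod's volumes and marks them detached, as in the entries that follow. A toy Go sketch of that reconcile idea, with illustrative names rather than kubelet's real types:

```go
// Toy version of the volume manager's sweep visible below: any mounted volume
// that is no longer desired (the pod was deleted) is torn down, then detached.
package main

import "fmt"

func main() {
	desired := map[string]bool{} // pod deleted: no volumes desired any more
	mounted := []string{"utilities", "kube-api-access-xs5ct", "catalog-content"}
	for _, vol := range mounted {
		if !desired[vol] {
			fmt.Printf("UnmountVolume started for volume %q\n", vol)
			fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", vol)
			fmt.Printf("Volume detached for volume %q\n", vol)
		}
	}
}
```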
Need to start a new one" pod="openstack-operators/openstack-operator-index-w7wsb" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.563752 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838da25d-1656-4908-905a-27046adfbc55-utilities\") pod \"838da25d-1656-4908-905a-27046adfbc55\" (UID: \"838da25d-1656-4908-905a-27046adfbc55\") " Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.564015 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs5ct\" (UniqueName: \"kubernetes.io/projected/838da25d-1656-4908-905a-27046adfbc55-kube-api-access-xs5ct\") pod \"838da25d-1656-4908-905a-27046adfbc55\" (UID: \"838da25d-1656-4908-905a-27046adfbc55\") " Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.564063 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838da25d-1656-4908-905a-27046adfbc55-catalog-content\") pod \"838da25d-1656-4908-905a-27046adfbc55\" (UID: \"838da25d-1656-4908-905a-27046adfbc55\") " Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.566987 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/838da25d-1656-4908-905a-27046adfbc55-utilities" (OuterVolumeSpecName: "utilities") pod "838da25d-1656-4908-905a-27046adfbc55" (UID: "838da25d-1656-4908-905a-27046adfbc55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.568217 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/838da25d-1656-4908-905a-27046adfbc55-kube-api-access-xs5ct" (OuterVolumeSpecName: "kube-api-access-xs5ct") pod "838da25d-1656-4908-905a-27046adfbc55" (UID: "838da25d-1656-4908-905a-27046adfbc55"). InnerVolumeSpecName "kube-api-access-xs5ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.589328 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/838da25d-1656-4908-905a-27046adfbc55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "838da25d-1656-4908-905a-27046adfbc55" (UID: "838da25d-1656-4908-905a-27046adfbc55"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.665595 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs5ct\" (UniqueName: \"kubernetes.io/projected/838da25d-1656-4908-905a-27046adfbc55-kube-api-access-xs5ct\") on node \"crc\" DevicePath \"\"" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.665637 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838da25d-1656-4908-905a-27046adfbc55-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.665650 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838da25d-1656-4908-905a-27046adfbc55-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.767633 4858 generic.go:334] "Generic (PLEG): container finished" podID="838da25d-1656-4908-905a-27046adfbc55" containerID="1cdf1f836124deb2d29042e1d34aef34259ac238d18aaafbb3a699b2a3e53b77" exitCode=0 Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.767738 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4phnf" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.767740 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4phnf" event={"ID":"838da25d-1656-4908-905a-27046adfbc55","Type":"ContainerDied","Data":"1cdf1f836124deb2d29042e1d34aef34259ac238d18aaafbb3a699b2a3e53b77"} Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.768318 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4phnf" event={"ID":"838da25d-1656-4908-905a-27046adfbc55","Type":"ContainerDied","Data":"ecceb828c5d50c0aa6741592857c73ef4acde4e6f2f5df4b5a9ee461030d8172"} Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.768370 4858 scope.go:117] "RemoveContainer" containerID="1cdf1f836124deb2d29042e1d34aef34259ac238d18aaafbb3a699b2a3e53b77" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.768877 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-rnkrn" podUID="0e59fac2-8eb9-44c9-a2ef-8e964a17e281" containerName="registry-server" containerID="cri-o://1d65ad47d76c883f860adc9288908dd9eeea8532aa1ff1a99776123e4c2def28" gracePeriod=2 Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.794864 4858 scope.go:117] "RemoveContainer" containerID="e945e1f0b00d7e0ec1d0f7143d52e32a580a8e650fc9e6ea3645af9af4d0b678" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.855767 4858 scope.go:117] "RemoveContainer" containerID="2d8b7e9bab80e15807e2170cf0d0ee1024a79d93472c0def5d94daad5a4b1365" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.859536 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4phnf"] Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.864505 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4phnf"] Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.872669 4858 scope.go:117] "RemoveContainer" containerID="1cdf1f836124deb2d29042e1d34aef34259ac238d18aaafbb3a699b2a3e53b77" Jan 27 20:24:54 crc kubenswrapper[4858]: E0127 20:24:54.873267 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.873310 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cdf1f836124deb2d29042e1d34aef34259ac238d18aaafbb3a699b2a3e53b77"} err="failed to get container status \"1cdf1f836124deb2d29042e1d34aef34259ac238d18aaafbb3a699b2a3e53b77\": rpc error: code = NotFound desc = could not find container \"1cdf1f836124deb2d29042e1d34aef34259ac238d18aaafbb3a699b2a3e53b77\": container with ID starting with 1cdf1f836124deb2d29042e1d34aef34259ac238d18aaafbb3a699b2a3e53b77 not found: ID does not exist" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.873338 4858 scope.go:117] "RemoveContainer" containerID="e945e1f0b00d7e0ec1d0f7143d52e32a580a8e650fc9e6ea3645af9af4d0b678" Jan 27 20:24:54 crc kubenswrapper[4858]: E0127 20:24:54.873656 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e945e1f0b00d7e0ec1d0f7143d52e32a580a8e650fc9e6ea3645af9af4d0b678\": container with ID starting with e945e1f0b00d7e0ec1d0f7143d52e32a580a8e650fc9e6ea3645af9af4d0b678 not found: ID does not exist" containerID="e945e1f0b00d7e0ec1d0f7143d52e32a580a8e650fc9e6ea3645af9af4d0b678" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.873682 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e945e1f0b00d7e0ec1d0f7143d52e32a580a8e650fc9e6ea3645af9af4d0b678"} err="failed to get container status \"e945e1f0b00d7e0ec1d0f7143d52e32a580a8e650fc9e6ea3645af9af4d0b678\": rpc error: code = NotFound desc = could not find container \"e945e1f0b00d7e0ec1d0f7143d52e32a580a8e650fc9e6ea3645af9af4d0b678\": container with ID starting with e945e1f0b00d7e0ec1d0f7143d52e32a580a8e650fc9e6ea3645af9af4d0b678 not found: ID does not exist" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.873698 4858 scope.go:117] "RemoveContainer" containerID="2d8b7e9bab80e15807e2170cf0d0ee1024a79d93472c0def5d94daad5a4b1365" Jan 27 20:24:54 crc kubenswrapper[4858]: E0127 20:24:54.874206 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d8b7e9bab80e15807e2170cf0d0ee1024a79d93472c0def5d94daad5a4b1365\": container with ID starting with 2d8b7e9bab80e15807e2170cf0d0ee1024a79d93472c0def5d94daad5a4b1365 not found: ID does not exist" containerID="2d8b7e9bab80e15807e2170cf0d0ee1024a79d93472c0def5d94daad5a4b1365" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.874231 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d8b7e9bab80e15807e2170cf0d0ee1024a79d93472c0def5d94daad5a4b1365"} err="failed to get container status \"2d8b7e9bab80e15807e2170cf0d0ee1024a79d93472c0def5d94daad5a4b1365\": rpc error: code = NotFound desc = could not find container \"2d8b7e9bab80e15807e2170cf0d0ee1024a79d93472c0def5d94daad5a4b1365\": container with ID starting with 2d8b7e9bab80e15807e2170cf0d0ee1024a79d93472c0def5d94daad5a4b1365 not found: ID does not exist" Jan 27 20:24:54 crc kubenswrapper[4858]: I0127 20:24:54.961845 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-w7wsb"] Jan 
27 20:24:54 crc kubenswrapper[4858]: W0127 20:24:54.967851 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba4e832d_ff36_45e9_90b9_44e125906dba.slice/crio-fe9a3e66dc8d069771fb257619dce77c8cec8fd3f3068f2f11aa3bfda0fe84d0 WatchSource:0}: Error finding container fe9a3e66dc8d069771fb257619dce77c8cec8fd3f3068f2f11aa3bfda0fe84d0: Status 404 returned error can't find the container with id fe9a3e66dc8d069771fb257619dce77c8cec8fd3f3068f2f11aa3bfda0fe84d0 Jan 27 20:24:55 crc kubenswrapper[4858]: I0127 20:24:55.210672 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rnkrn" Jan 27 20:24:55 crc kubenswrapper[4858]: I0127 20:24:55.378079 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvcrx\" (UniqueName: \"kubernetes.io/projected/0e59fac2-8eb9-44c9-a2ef-8e964a17e281-kube-api-access-dvcrx\") pod \"0e59fac2-8eb9-44c9-a2ef-8e964a17e281\" (UID: \"0e59fac2-8eb9-44c9-a2ef-8e964a17e281\") " Jan 27 20:24:55 crc kubenswrapper[4858]: I0127 20:24:55.384694 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e59fac2-8eb9-44c9-a2ef-8e964a17e281-kube-api-access-dvcrx" (OuterVolumeSpecName: "kube-api-access-dvcrx") pod "0e59fac2-8eb9-44c9-a2ef-8e964a17e281" (UID: "0e59fac2-8eb9-44c9-a2ef-8e964a17e281"). InnerVolumeSpecName "kube-api-access-dvcrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:24:55 crc kubenswrapper[4858]: I0127 20:24:55.480753 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvcrx\" (UniqueName: \"kubernetes.io/projected/0e59fac2-8eb9-44c9-a2ef-8e964a17e281-kube-api-access-dvcrx\") on node \"crc\" DevicePath \"\"" Jan 27 20:24:55 crc kubenswrapper[4858]: I0127 20:24:55.776482 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-w7wsb" event={"ID":"ba4e832d-ff36-45e9-90b9-44e125906dba","Type":"ContainerStarted","Data":"07d92031b99811315cc3bc147b69eea8cab19f0869b640168844703a63dfd516"} Jan 27 20:24:55 crc kubenswrapper[4858]: I0127 20:24:55.776526 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-w7wsb" event={"ID":"ba4e832d-ff36-45e9-90b9-44e125906dba","Type":"ContainerStarted","Data":"fe9a3e66dc8d069771fb257619dce77c8cec8fd3f3068f2f11aa3bfda0fe84d0"} Jan 27 20:24:55 crc kubenswrapper[4858]: I0127 20:24:55.778023 4858 generic.go:334] "Generic (PLEG): container finished" podID="0e59fac2-8eb9-44c9-a2ef-8e964a17e281" containerID="1d65ad47d76c883f860adc9288908dd9eeea8532aa1ff1a99776123e4c2def28" exitCode=0 Jan 27 20:24:55 crc kubenswrapper[4858]: I0127 20:24:55.778081 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rnkrn" Jan 27 20:24:55 crc kubenswrapper[4858]: I0127 20:24:55.778074 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rnkrn" event={"ID":"0e59fac2-8eb9-44c9-a2ef-8e964a17e281","Type":"ContainerDied","Data":"1d65ad47d76c883f860adc9288908dd9eeea8532aa1ff1a99776123e4c2def28"} Jan 27 20:24:55 crc kubenswrapper[4858]: I0127 20:24:55.778319 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rnkrn" event={"ID":"0e59fac2-8eb9-44c9-a2ef-8e964a17e281","Type":"ContainerDied","Data":"9c7bac1eb644a885de53b1a775623453973fbd12cee3097456e4d2f91d0bf119"} Jan 27 20:24:55 crc kubenswrapper[4858]: I0127 20:24:55.778352 4858 scope.go:117] "RemoveContainer" containerID="1d65ad47d76c883f860adc9288908dd9eeea8532aa1ff1a99776123e4c2def28" Jan 27 20:24:55 crc kubenswrapper[4858]: I0127 20:24:55.792838 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-w7wsb" podStartSLOduration=1.720371352 podStartE2EDuration="1.79282538s" podCreationTimestamp="2026-01-27 20:24:54 +0000 UTC" firstStartedPulling="2026-01-27 20:24:54.973323255 +0000 UTC m=+1039.681138971" lastFinishedPulling="2026-01-27 20:24:55.045777293 +0000 UTC m=+1039.753592999" observedRunningTime="2026-01-27 20:24:55.789198858 +0000 UTC m=+1040.497014564" watchObservedRunningTime="2026-01-27 20:24:55.79282538 +0000 UTC m=+1040.500641086" Jan 27 20:24:55 crc kubenswrapper[4858]: I0127 20:24:55.800615 4858 scope.go:117] "RemoveContainer" containerID="1d65ad47d76c883f860adc9288908dd9eeea8532aa1ff1a99776123e4c2def28" Jan 27 20:24:55 crc kubenswrapper[4858]: E0127 20:24:55.801192 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d65ad47d76c883f860adc9288908dd9eeea8532aa1ff1a99776123e4c2def28\": container with ID starting with 1d65ad47d76c883f860adc9288908dd9eeea8532aa1ff1a99776123e4c2def28 not found: ID does not exist" containerID="1d65ad47d76c883f860adc9288908dd9eeea8532aa1ff1a99776123e4c2def28" Jan 27 20:24:55 crc kubenswrapper[4858]: I0127 20:24:55.801242 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d65ad47d76c883f860adc9288908dd9eeea8532aa1ff1a99776123e4c2def28"} err="failed to get container status \"1d65ad47d76c883f860adc9288908dd9eeea8532aa1ff1a99776123e4c2def28\": rpc error: code = NotFound desc = could not find container \"1d65ad47d76c883f860adc9288908dd9eeea8532aa1ff1a99776123e4c2def28\": container with ID starting with 1d65ad47d76c883f860adc9288908dd9eeea8532aa1ff1a99776123e4c2def28 not found: ID does not exist" Jan 27 20:24:55 crc kubenswrapper[4858]: I0127 20:24:55.811261 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rnkrn"] Jan 27 20:24:55 crc kubenswrapper[4858]: I0127 20:24:55.815628 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-rnkrn"] Jan 27 20:24:56 crc kubenswrapper[4858]: I0127 20:24:56.083433 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e59fac2-8eb9-44c9-a2ef-8e964a17e281" path="/var/lib/kubelet/pods/0e59fac2-8eb9-44c9-a2ef-8e964a17e281/volumes" Jan 27 20:24:56 crc kubenswrapper[4858]: I0127 20:24:56.084479 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="838da25d-1656-4908-905a-27046adfbc55" path="/var/lib/kubelet/pods/838da25d-1656-4908-905a-27046adfbc55/volumes" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.148268 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xnjt5"] Jan 27 20:24:58 crc kubenswrapper[4858]: E0127 20:24:58.149943 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838da25d-1656-4908-905a-27046adfbc55" containerName="registry-server" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.149966 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="838da25d-1656-4908-905a-27046adfbc55" containerName="registry-server" Jan 27 20:24:58 crc kubenswrapper[4858]: E0127 20:24:58.150017 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838da25d-1656-4908-905a-27046adfbc55" containerName="extract-content" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.150027 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="838da25d-1656-4908-905a-27046adfbc55" containerName="extract-content" Jan 27 20:24:58 crc kubenswrapper[4858]: E0127 20:24:58.150057 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838da25d-1656-4908-905a-27046adfbc55" containerName="extract-utilities" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.150069 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="838da25d-1656-4908-905a-27046adfbc55" containerName="extract-utilities" Jan 27 20:24:58 crc kubenswrapper[4858]: E0127 20:24:58.150080 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e59fac2-8eb9-44c9-a2ef-8e964a17e281" containerName="registry-server" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.150094 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e59fac2-8eb9-44c9-a2ef-8e964a17e281" containerName="registry-server" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.150576 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="838da25d-1656-4908-905a-27046adfbc55" containerName="registry-server" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.150615 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e59fac2-8eb9-44c9-a2ef-8e964a17e281" containerName="registry-server" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.154447 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xnjt5" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.166537 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xnjt5"] Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.220686 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmslq\" (UniqueName: \"kubernetes.io/projected/a59f78bc-e181-4d2f-9415-32d82f902e9f-kube-api-access-lmslq\") pod \"community-operators-xnjt5\" (UID: \"a59f78bc-e181-4d2f-9415-32d82f902e9f\") " pod="openshift-marketplace/community-operators-xnjt5" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.220761 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59f78bc-e181-4d2f-9415-32d82f902e9f-utilities\") pod \"community-operators-xnjt5\" (UID: \"a59f78bc-e181-4d2f-9415-32d82f902e9f\") " pod="openshift-marketplace/community-operators-xnjt5" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.220806 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59f78bc-e181-4d2f-9415-32d82f902e9f-catalog-content\") pod \"community-operators-xnjt5\" (UID: \"a59f78bc-e181-4d2f-9415-32d82f902e9f\") " pod="openshift-marketplace/community-operators-xnjt5" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.322482 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmslq\" (UniqueName: \"kubernetes.io/projected/a59f78bc-e181-4d2f-9415-32d82f902e9f-kube-api-access-lmslq\") pod \"community-operators-xnjt5\" (UID: \"a59f78bc-e181-4d2f-9415-32d82f902e9f\") " pod="openshift-marketplace/community-operators-xnjt5" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.322588 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59f78bc-e181-4d2f-9415-32d82f902e9f-utilities\") pod \"community-operators-xnjt5\" (UID: \"a59f78bc-e181-4d2f-9415-32d82f902e9f\") " pod="openshift-marketplace/community-operators-xnjt5" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.322633 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59f78bc-e181-4d2f-9415-32d82f902e9f-catalog-content\") pod \"community-operators-xnjt5\" (UID: \"a59f78bc-e181-4d2f-9415-32d82f902e9f\") " pod="openshift-marketplace/community-operators-xnjt5" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.323059 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59f78bc-e181-4d2f-9415-32d82f902e9f-catalog-content\") pod \"community-operators-xnjt5\" (UID: \"a59f78bc-e181-4d2f-9415-32d82f902e9f\") " pod="openshift-marketplace/community-operators-xnjt5" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.323205 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59f78bc-e181-4d2f-9415-32d82f902e9f-utilities\") pod \"community-operators-xnjt5\" (UID: \"a59f78bc-e181-4d2f-9415-32d82f902e9f\") " pod="openshift-marketplace/community-operators-xnjt5" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.350037 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lmslq\" (UniqueName: \"kubernetes.io/projected/a59f78bc-e181-4d2f-9415-32d82f902e9f-kube-api-access-lmslq\") pod \"community-operators-xnjt5\" (UID: \"a59f78bc-e181-4d2f-9415-32d82f902e9f\") " pod="openshift-marketplace/community-operators-xnjt5" Jan 27 20:24:58 crc kubenswrapper[4858]: I0127 20:24:58.484039 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xnjt5" Jan 27 20:24:59 crc kubenswrapper[4858]: I0127 20:24:59.004070 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xnjt5"] Jan 27 20:24:59 crc kubenswrapper[4858]: W0127 20:24:59.013910 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda59f78bc_e181_4d2f_9415_32d82f902e9f.slice/crio-5e9096ea0a9c4307af7bd9b6980c3bd4eaa1e970387bc13e8a757f256550117b WatchSource:0}: Error finding container 5e9096ea0a9c4307af7bd9b6980c3bd4eaa1e970387bc13e8a757f256550117b: Status 404 returned error can't find the container with id 5e9096ea0a9c4307af7bd9b6980c3bd4eaa1e970387bc13e8a757f256550117b Jan 27 20:24:59 crc kubenswrapper[4858]: I0127 20:24:59.531347 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bppj6"] Jan 27 20:24:59 crc kubenswrapper[4858]: I0127 20:24:59.531969 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bppj6" podUID="8ea1e817-c0c8-4f44-8c70-167358f102b6" containerName="registry-server" containerID="cri-o://dcae2cde0132041a00c1e2d36a314d4b07f404e5155755b91bbcd16ca790d8ed" gracePeriod=2 Jan 27 20:24:59 crc kubenswrapper[4858]: I0127 20:24:59.813314 4858 generic.go:334] "Generic (PLEG): container finished" podID="8ea1e817-c0c8-4f44-8c70-167358f102b6" containerID="dcae2cde0132041a00c1e2d36a314d4b07f404e5155755b91bbcd16ca790d8ed" exitCode=0 Jan 27 20:24:59 crc kubenswrapper[4858]: I0127 20:24:59.813382 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bppj6" event={"ID":"8ea1e817-c0c8-4f44-8c70-167358f102b6","Type":"ContainerDied","Data":"dcae2cde0132041a00c1e2d36a314d4b07f404e5155755b91bbcd16ca790d8ed"} Jan 27 20:24:59 crc kubenswrapper[4858]: I0127 20:24:59.815426 4858 generic.go:334] "Generic (PLEG): container finished" podID="a59f78bc-e181-4d2f-9415-32d82f902e9f" containerID="b56efc6b9e88e99a146ff77784515012a695947eb53a29e4477a0c76b364227c" exitCode=0 Jan 27 20:24:59 crc kubenswrapper[4858]: I0127 20:24:59.815456 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnjt5" event={"ID":"a59f78bc-e181-4d2f-9415-32d82f902e9f","Type":"ContainerDied","Data":"b56efc6b9e88e99a146ff77784515012a695947eb53a29e4477a0c76b364227c"} Jan 27 20:24:59 crc kubenswrapper[4858]: I0127 20:24:59.815481 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnjt5" event={"ID":"a59f78bc-e181-4d2f-9415-32d82f902e9f","Type":"ContainerStarted","Data":"5e9096ea0a9c4307af7bd9b6980c3bd4eaa1e970387bc13e8a757f256550117b"} Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.472096 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bppj6" Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.661113 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ea1e817-c0c8-4f44-8c70-167358f102b6-utilities\") pod \"8ea1e817-c0c8-4f44-8c70-167358f102b6\" (UID: \"8ea1e817-c0c8-4f44-8c70-167358f102b6\") " Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.661180 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plffg\" (UniqueName: \"kubernetes.io/projected/8ea1e817-c0c8-4f44-8c70-167358f102b6-kube-api-access-plffg\") pod \"8ea1e817-c0c8-4f44-8c70-167358f102b6\" (UID: \"8ea1e817-c0c8-4f44-8c70-167358f102b6\") " Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.661311 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ea1e817-c0c8-4f44-8c70-167358f102b6-catalog-content\") pod \"8ea1e817-c0c8-4f44-8c70-167358f102b6\" (UID: \"8ea1e817-c0c8-4f44-8c70-167358f102b6\") " Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.662161 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ea1e817-c0c8-4f44-8c70-167358f102b6-utilities" (OuterVolumeSpecName: "utilities") pod "8ea1e817-c0c8-4f44-8c70-167358f102b6" (UID: "8ea1e817-c0c8-4f44-8c70-167358f102b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.671916 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea1e817-c0c8-4f44-8c70-167358f102b6-kube-api-access-plffg" (OuterVolumeSpecName: "kube-api-access-plffg") pod "8ea1e817-c0c8-4f44-8c70-167358f102b6" (UID: "8ea1e817-c0c8-4f44-8c70-167358f102b6"). InnerVolumeSpecName "kube-api-access-plffg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.728915 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ea1e817-c0c8-4f44-8c70-167358f102b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ea1e817-c0c8-4f44-8c70-167358f102b6" (UID: "8ea1e817-c0c8-4f44-8c70-167358f102b6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.763213 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ea1e817-c0c8-4f44-8c70-167358f102b6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.763477 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ea1e817-c0c8-4f44-8c70-167358f102b6-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.763619 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plffg\" (UniqueName: \"kubernetes.io/projected/8ea1e817-c0c8-4f44-8c70-167358f102b6-kube-api-access-plffg\") on node \"crc\" DevicePath \"\"" Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.822720 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnjt5" event={"ID":"a59f78bc-e181-4d2f-9415-32d82f902e9f","Type":"ContainerStarted","Data":"826829c358f438057ee9dff0c61ead937e8cd605773d1d1edc6342f0060fcc9c"} Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.825364 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bppj6" event={"ID":"8ea1e817-c0c8-4f44-8c70-167358f102b6","Type":"ContainerDied","Data":"38e38502c4186875d236a53546d9f295cb7215c4b8ad58ce2ac0d0e80790dece"} Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.825476 4858 scope.go:117] "RemoveContainer" containerID="dcae2cde0132041a00c1e2d36a314d4b07f404e5155755b91bbcd16ca790d8ed" Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.825654 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bppj6" Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.845466 4858 scope.go:117] "RemoveContainer" containerID="f9044f20f520269fe50cb9f672b8af3d8d9cc0e94e23194e8143e8b542030490" Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.861571 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bppj6"] Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.864494 4858 scope.go:117] "RemoveContainer" containerID="6674b88908ea5dccfe8b0858b179cba80d4b5bd66d56bc6b52f4be956b63b032" Jan 27 20:25:00 crc kubenswrapper[4858]: I0127 20:25:00.866523 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bppj6"] Jan 27 20:25:01 crc kubenswrapper[4858]: I0127 20:25:01.832996 4858 generic.go:334] "Generic (PLEG): container finished" podID="a59f78bc-e181-4d2f-9415-32d82f902e9f" containerID="826829c358f438057ee9dff0c61ead937e8cd605773d1d1edc6342f0060fcc9c" exitCode=0 Jan 27 20:25:01 crc kubenswrapper[4858]: I0127 20:25:01.833073 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnjt5" event={"ID":"a59f78bc-e181-4d2f-9415-32d82f902e9f","Type":"ContainerDied","Data":"826829c358f438057ee9dff0c61ead937e8cd605773d1d1edc6342f0060fcc9c"} Jan 27 20:25:02 crc kubenswrapper[4858]: I0127 20:25:02.087515 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ea1e817-c0c8-4f44-8c70-167358f102b6" path="/var/lib/kubelet/pods/8ea1e817-c0c8-4f44-8c70-167358f102b6/volumes" Jan 27 20:25:02 crc kubenswrapper[4858]: I0127 20:25:02.843749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnjt5" event={"ID":"a59f78bc-e181-4d2f-9415-32d82f902e9f","Type":"ContainerStarted","Data":"52db9199d7a5523978b8f1d5af4a2dff0a1f179de8abf6bde4bc417558904530"} Jan 27 20:25:02 crc kubenswrapper[4858]: I0127 20:25:02.875077 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xnjt5" podStartSLOduration=2.454087689 podStartE2EDuration="4.875043272s" podCreationTimestamp="2026-01-27 20:24:58 +0000 UTC" firstStartedPulling="2026-01-27 20:24:59.816972159 +0000 UTC m=+1044.524787865" lastFinishedPulling="2026-01-27 20:25:02.237927742 +0000 UTC m=+1046.945743448" observedRunningTime="2026-01-27 20:25:02.864804776 +0000 UTC m=+1047.572620552" watchObservedRunningTime="2026-01-27 20:25:02.875043272 +0000 UTC m=+1047.582859028" Jan 27 20:25:04 crc kubenswrapper[4858]: I0127 20:25:04.517511 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-w7wsb" Jan 27 20:25:04 crc kubenswrapper[4858]: I0127 20:25:04.517870 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-w7wsb" Jan 27 20:25:04 crc kubenswrapper[4858]: I0127 20:25:04.563513 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-w7wsb" Jan 27 20:25:04 crc kubenswrapper[4858]: I0127 20:25:04.885051 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-w7wsb" Jan 27 20:25:06 crc kubenswrapper[4858]: I0127 20:25:06.981813 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6"] Jan 27 
20:25:06 crc kubenswrapper[4858]: E0127 20:25:06.982336 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea1e817-c0c8-4f44-8c70-167358f102b6" containerName="extract-utilities" Jan 27 20:25:06 crc kubenswrapper[4858]: I0127 20:25:06.982348 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea1e817-c0c8-4f44-8c70-167358f102b6" containerName="extract-utilities" Jan 27 20:25:06 crc kubenswrapper[4858]: E0127 20:25:06.982360 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea1e817-c0c8-4f44-8c70-167358f102b6" containerName="extract-content" Jan 27 20:25:06 crc kubenswrapper[4858]: I0127 20:25:06.982367 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea1e817-c0c8-4f44-8c70-167358f102b6" containerName="extract-content" Jan 27 20:25:06 crc kubenswrapper[4858]: E0127 20:25:06.982380 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea1e817-c0c8-4f44-8c70-167358f102b6" containerName="registry-server" Jan 27 20:25:06 crc kubenswrapper[4858]: I0127 20:25:06.982385 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea1e817-c0c8-4f44-8c70-167358f102b6" containerName="registry-server" Jan 27 20:25:06 crc kubenswrapper[4858]: I0127 20:25:06.982495 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea1e817-c0c8-4f44-8c70-167358f102b6" containerName="registry-server" Jan 27 20:25:06 crc kubenswrapper[4858]: I0127 20:25:06.983499 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6" Jan 27 20:25:06 crc kubenswrapper[4858]: I0127 20:25:06.985804 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-bzx45" Jan 27 20:25:06 crc kubenswrapper[4858]: I0127 20:25:06.995396 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6"] Jan 27 20:25:07 crc kubenswrapper[4858]: I0127 20:25:07.170802 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52190d7c-3903-46b5-8fa4-96ef6b154bbe-util\") pod \"32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6\" (UID: \"52190d7c-3903-46b5-8fa4-96ef6b154bbe\") " pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6" Jan 27 20:25:07 crc kubenswrapper[4858]: I0127 20:25:07.170896 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52190d7c-3903-46b5-8fa4-96ef6b154bbe-bundle\") pod \"32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6\" (UID: \"52190d7c-3903-46b5-8fa4-96ef6b154bbe\") " pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6" Jan 27 20:25:07 crc kubenswrapper[4858]: I0127 20:25:07.170963 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtvcp\" (UniqueName: \"kubernetes.io/projected/52190d7c-3903-46b5-8fa4-96ef6b154bbe-kube-api-access-wtvcp\") pod \"32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6\" (UID: \"52190d7c-3903-46b5-8fa4-96ef6b154bbe\") " pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6" Jan 27 20:25:07 crc kubenswrapper[4858]: I0127 20:25:07.272326 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52190d7c-3903-46b5-8fa4-96ef6b154bbe-bundle\") pod \"32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6\" (UID: \"52190d7c-3903-46b5-8fa4-96ef6b154bbe\") " pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6" Jan 27 20:25:07 crc kubenswrapper[4858]: I0127 20:25:07.272436 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtvcp\" (UniqueName: \"kubernetes.io/projected/52190d7c-3903-46b5-8fa4-96ef6b154bbe-kube-api-access-wtvcp\") pod \"32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6\" (UID: \"52190d7c-3903-46b5-8fa4-96ef6b154bbe\") " pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6" Jan 27 20:25:07 crc kubenswrapper[4858]: I0127 20:25:07.272467 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52190d7c-3903-46b5-8fa4-96ef6b154bbe-util\") pod \"32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6\" (UID: \"52190d7c-3903-46b5-8fa4-96ef6b154bbe\") " pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6" Jan 27 20:25:07 crc kubenswrapper[4858]: I0127 20:25:07.272898 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52190d7c-3903-46b5-8fa4-96ef6b154bbe-bundle\") pod \"32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6\" (UID: \"52190d7c-3903-46b5-8fa4-96ef6b154bbe\") " pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6" Jan 27 20:25:07 crc kubenswrapper[4858]: I0127 20:25:07.273236 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52190d7c-3903-46b5-8fa4-96ef6b154bbe-util\") pod \"32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6\" (UID: \"52190d7c-3903-46b5-8fa4-96ef6b154bbe\") " pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6" Jan 27 20:25:07 crc kubenswrapper[4858]: I0127 20:25:07.299515 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtvcp\" (UniqueName: \"kubernetes.io/projected/52190d7c-3903-46b5-8fa4-96ef6b154bbe-kube-api-access-wtvcp\") pod \"32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6\" (UID: \"52190d7c-3903-46b5-8fa4-96ef6b154bbe\") " pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6" Jan 27 20:25:07 crc kubenswrapper[4858]: I0127 20:25:07.309361 4858 util.go:30] "No sandbox for pod can be found. 
Jan 27 20:25:07 crc kubenswrapper[4858]: I0127 20:25:07.309361 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6"
Jan 27 20:25:07 crc kubenswrapper[4858]: I0127 20:25:07.711716 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6"]
Jan 27 20:25:07 crc kubenswrapper[4858]: I0127 20:25:07.887780 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6" event={"ID":"52190d7c-3903-46b5-8fa4-96ef6b154bbe","Type":"ContainerStarted","Data":"70afd149de69c0db858cc34f70a97d94d12e03ffecf875b1e53c95b906b6a2e5"}
Jan 27 20:25:08 crc kubenswrapper[4858]: I0127 20:25:08.484580 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xnjt5"
Jan 27 20:25:08 crc kubenswrapper[4858]: I0127 20:25:08.484629 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xnjt5"
Jan 27 20:25:08 crc kubenswrapper[4858]: I0127 20:25:08.557346 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xnjt5"
Jan 27 20:25:08 crc kubenswrapper[4858]: I0127 20:25:08.895597 4858 generic.go:334] "Generic (PLEG): container finished" podID="52190d7c-3903-46b5-8fa4-96ef6b154bbe" containerID="6efb6651e596c7503afc5a2329dcd703a14ebd10f1a508a59fb34490c5199927" exitCode=0
Jan 27 20:25:08 crc kubenswrapper[4858]: I0127 20:25:08.895685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6" event={"ID":"52190d7c-3903-46b5-8fa4-96ef6b154bbe","Type":"ContainerDied","Data":"6efb6651e596c7503afc5a2329dcd703a14ebd10f1a508a59fb34490c5199927"}
Jan 27 20:25:08 crc kubenswrapper[4858]: I0127 20:25:08.938855 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xnjt5"
Jan 27 20:25:09 crc kubenswrapper[4858]: I0127 20:25:09.907594 4858 generic.go:334] "Generic (PLEG): container finished" podID="52190d7c-3903-46b5-8fa4-96ef6b154bbe" containerID="aa19a795163476bd3e139b568002160882948866decf50b06ecf7d98bcba7ba1" exitCode=0
Jan 27 20:25:09 crc kubenswrapper[4858]: I0127 20:25:09.907737 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6" event={"ID":"52190d7c-3903-46b5-8fa4-96ef6b154bbe","Type":"ContainerDied","Data":"aa19a795163476bd3e139b568002160882948866decf50b06ecf7d98bcba7ba1"}
Jan 27 20:25:10 crc kubenswrapper[4858]: I0127 20:25:10.917412 4858 generic.go:334] "Generic (PLEG): container finished" podID="52190d7c-3903-46b5-8fa4-96ef6b154bbe" containerID="3c91805b62726113b503912371968dbb6cb510bc881cb2220a498cbee42aeda0" exitCode=0
Jan 27 20:25:10 crc kubenswrapper[4858]: I0127 20:25:10.917496 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6" event={"ID":"52190d7c-3903-46b5-8fa4-96ef6b154bbe","Type":"ContainerDied","Data":"3c91805b62726113b503912371968dbb6cb510bc881cb2220a498cbee42aeda0"}
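The generic.go:334 / kubelet.go:2453 pairs above come from the PLEG relist: the kubelet diffs the runtime's container states against its previous snapshot and turns each transition into an event for the sync loop, so the bundle pod's three containers finishing one after another with exitCode=0 yield three ContainerDied events. A simplified sketch of that diff; types and names are illustrative, not kubelet internals:

```go
package main

import "fmt"

// snapshot maps containerID -> state as reported by the runtime at relist.
type snapshot map[string]string

// relist returns the IDs that transitioned running -> exited since last time.
func relist(prev, cur snapshot) []string {
	var died []string
	for id, state := range cur {
		if state == "exited" && prev[id] == "running" {
			died = append(died, id)
		}
	}
	return died
}

func main() {
	prev := snapshot{"6efb6651e596": "running"}
	cur := snapshot{"6efb6651e596": "exited"}
	for _, id := range relist(prev, cur) {
		fmt.Printf("Generic (PLEG): container finished containerID=%q exitCode=0\n", id)
		fmt.Printf("SyncLoop (PLEG): event ContainerDied %q\n", id)
	}
}
```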
Jan 27 20:25:12 crc kubenswrapper[4858]: I0127 20:25:12.195101 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6"
Jan 27 20:25:12 crc kubenswrapper[4858]: I0127 20:25:12.350188 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52190d7c-3903-46b5-8fa4-96ef6b154bbe-util\") pod \"52190d7c-3903-46b5-8fa4-96ef6b154bbe\" (UID: \"52190d7c-3903-46b5-8fa4-96ef6b154bbe\") "
Jan 27 20:25:12 crc kubenswrapper[4858]: I0127 20:25:12.355035 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52190d7c-3903-46b5-8fa4-96ef6b154bbe-bundle\") pod \"52190d7c-3903-46b5-8fa4-96ef6b154bbe\" (UID: \"52190d7c-3903-46b5-8fa4-96ef6b154bbe\") "
Jan 27 20:25:12 crc kubenswrapper[4858]: I0127 20:25:12.355094 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtvcp\" (UniqueName: \"kubernetes.io/projected/52190d7c-3903-46b5-8fa4-96ef6b154bbe-kube-api-access-wtvcp\") pod \"52190d7c-3903-46b5-8fa4-96ef6b154bbe\" (UID: \"52190d7c-3903-46b5-8fa4-96ef6b154bbe\") "
Jan 27 20:25:12 crc kubenswrapper[4858]: I0127 20:25:12.355691 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52190d7c-3903-46b5-8fa4-96ef6b154bbe-bundle" (OuterVolumeSpecName: "bundle") pod "52190d7c-3903-46b5-8fa4-96ef6b154bbe" (UID: "52190d7c-3903-46b5-8fa4-96ef6b154bbe"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:25:12 crc kubenswrapper[4858]: I0127 20:25:12.362835 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52190d7c-3903-46b5-8fa4-96ef6b154bbe-kube-api-access-wtvcp" (OuterVolumeSpecName: "kube-api-access-wtvcp") pod "52190d7c-3903-46b5-8fa4-96ef6b154bbe" (UID: "52190d7c-3903-46b5-8fa4-96ef6b154bbe"). InnerVolumeSpecName "kube-api-access-wtvcp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:25:12 crc kubenswrapper[4858]: I0127 20:25:12.366533 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52190d7c-3903-46b5-8fa4-96ef6b154bbe-util" (OuterVolumeSpecName: "util") pod "52190d7c-3903-46b5-8fa4-96ef6b154bbe" (UID: "52190d7c-3903-46b5-8fa4-96ef6b154bbe"). InnerVolumeSpecName "util".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:25:12 crc kubenswrapper[4858]: I0127 20:25:12.457031 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52190d7c-3903-46b5-8fa4-96ef6b154bbe-util\") on node \"crc\" DevicePath \"\"" Jan 27 20:25:12 crc kubenswrapper[4858]: I0127 20:25:12.457070 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52190d7c-3903-46b5-8fa4-96ef6b154bbe-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:25:12 crc kubenswrapper[4858]: I0127 20:25:12.457082 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtvcp\" (UniqueName: \"kubernetes.io/projected/52190d7c-3903-46b5-8fa4-96ef6b154bbe-kube-api-access-wtvcp\") on node \"crc\" DevicePath \"\"" Jan 27 20:25:12 crc kubenswrapper[4858]: I0127 20:25:12.939572 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6" event={"ID":"52190d7c-3903-46b5-8fa4-96ef6b154bbe","Type":"ContainerDied","Data":"70afd149de69c0db858cc34f70a97d94d12e03ffecf875b1e53c95b906b6a2e5"} Jan 27 20:25:12 crc kubenswrapper[4858]: I0127 20:25:12.939624 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70afd149de69c0db858cc34f70a97d94d12e03ffecf875b1e53c95b906b6a2e5" Jan 27 20:25:12 crc kubenswrapper[4858]: I0127 20:25:12.939711 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6" Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.337805 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xnjt5"] Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.338380 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xnjt5" podUID="a59f78bc-e181-4d2f-9415-32d82f902e9f" containerName="registry-server" containerID="cri-o://52db9199d7a5523978b8f1d5af4a2dff0a1f179de8abf6bde4bc417558904530" gracePeriod=2 Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.724670 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xnjt5" Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.885171 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmslq\" (UniqueName: \"kubernetes.io/projected/a59f78bc-e181-4d2f-9415-32d82f902e9f-kube-api-access-lmslq\") pod \"a59f78bc-e181-4d2f-9415-32d82f902e9f\" (UID: \"a59f78bc-e181-4d2f-9415-32d82f902e9f\") " Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.885308 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59f78bc-e181-4d2f-9415-32d82f902e9f-catalog-content\") pod \"a59f78bc-e181-4d2f-9415-32d82f902e9f\" (UID: \"a59f78bc-e181-4d2f-9415-32d82f902e9f\") " Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.885436 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59f78bc-e181-4d2f-9415-32d82f902e9f-utilities\") pod \"a59f78bc-e181-4d2f-9415-32d82f902e9f\" (UID: \"a59f78bc-e181-4d2f-9415-32d82f902e9f\") " Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.886417 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59f78bc-e181-4d2f-9415-32d82f902e9f-utilities" (OuterVolumeSpecName: "utilities") pod "a59f78bc-e181-4d2f-9415-32d82f902e9f" (UID: "a59f78bc-e181-4d2f-9415-32d82f902e9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.895819 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a59f78bc-e181-4d2f-9415-32d82f902e9f-kube-api-access-lmslq" (OuterVolumeSpecName: "kube-api-access-lmslq") pod "a59f78bc-e181-4d2f-9415-32d82f902e9f" (UID: "a59f78bc-e181-4d2f-9415-32d82f902e9f"). InnerVolumeSpecName "kube-api-access-lmslq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.941561 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59f78bc-e181-4d2f-9415-32d82f902e9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a59f78bc-e181-4d2f-9415-32d82f902e9f" (UID: "a59f78bc-e181-4d2f-9415-32d82f902e9f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.955165 4858 generic.go:334] "Generic (PLEG): container finished" podID="a59f78bc-e181-4d2f-9415-32d82f902e9f" containerID="52db9199d7a5523978b8f1d5af4a2dff0a1f179de8abf6bde4bc417558904530" exitCode=0 Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.955207 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnjt5" event={"ID":"a59f78bc-e181-4d2f-9415-32d82f902e9f","Type":"ContainerDied","Data":"52db9199d7a5523978b8f1d5af4a2dff0a1f179de8abf6bde4bc417558904530"} Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.955235 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnjt5" event={"ID":"a59f78bc-e181-4d2f-9415-32d82f902e9f","Type":"ContainerDied","Data":"5e9096ea0a9c4307af7bd9b6980c3bd4eaa1e970387bc13e8a757f256550117b"} Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.955240 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xnjt5" Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.955253 4858 scope.go:117] "RemoveContainer" containerID="52db9199d7a5523978b8f1d5af4a2dff0a1f179de8abf6bde4bc417558904530" Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.976239 4858 scope.go:117] "RemoveContainer" containerID="826829c358f438057ee9dff0c61ead937e8cd605773d1d1edc6342f0060fcc9c" Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.985754 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xnjt5"] Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.986914 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmslq\" (UniqueName: \"kubernetes.io/projected/a59f78bc-e181-4d2f-9415-32d82f902e9f-kube-api-access-lmslq\") on node \"crc\" DevicePath \"\"" Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.986940 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a59f78bc-e181-4d2f-9415-32d82f902e9f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.986950 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a59f78bc-e181-4d2f-9415-32d82f902e9f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:25:14 crc kubenswrapper[4858]: I0127 20:25:14.993195 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xnjt5"] Jan 27 20:25:15 crc kubenswrapper[4858]: I0127 20:25:15.010218 4858 scope.go:117] "RemoveContainer" containerID="b56efc6b9e88e99a146ff77784515012a695947eb53a29e4477a0c76b364227c" Jan 27 20:25:15 crc kubenswrapper[4858]: I0127 20:25:15.027969 4858 scope.go:117] "RemoveContainer" containerID="52db9199d7a5523978b8f1d5af4a2dff0a1f179de8abf6bde4bc417558904530" Jan 27 20:25:15 crc kubenswrapper[4858]: E0127 20:25:15.028591 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52db9199d7a5523978b8f1d5af4a2dff0a1f179de8abf6bde4bc417558904530\": container with ID starting with 52db9199d7a5523978b8f1d5af4a2dff0a1f179de8abf6bde4bc417558904530 not found: ID does not exist" containerID="52db9199d7a5523978b8f1d5af4a2dff0a1f179de8abf6bde4bc417558904530" Jan 27 20:25:15 crc kubenswrapper[4858]: I0127 20:25:15.028645 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52db9199d7a5523978b8f1d5af4a2dff0a1f179de8abf6bde4bc417558904530"} err="failed to get container status \"52db9199d7a5523978b8f1d5af4a2dff0a1f179de8abf6bde4bc417558904530\": rpc error: code = NotFound desc = could not find container \"52db9199d7a5523978b8f1d5af4a2dff0a1f179de8abf6bde4bc417558904530\": container with ID starting with 52db9199d7a5523978b8f1d5af4a2dff0a1f179de8abf6bde4bc417558904530 not found: ID does not exist" Jan 27 20:25:15 crc kubenswrapper[4858]: I0127 20:25:15.028673 4858 scope.go:117] "RemoveContainer" containerID="826829c358f438057ee9dff0c61ead937e8cd605773d1d1edc6342f0060fcc9c" Jan 27 20:25:15 crc kubenswrapper[4858]: E0127 20:25:15.029227 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"826829c358f438057ee9dff0c61ead937e8cd605773d1d1edc6342f0060fcc9c\": container with ID starting with 826829c358f438057ee9dff0c61ead937e8cd605773d1d1edc6342f0060fcc9c not found: 
ID does not exist" containerID="826829c358f438057ee9dff0c61ead937e8cd605773d1d1edc6342f0060fcc9c" Jan 27 20:25:15 crc kubenswrapper[4858]: I0127 20:25:15.029304 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"826829c358f438057ee9dff0c61ead937e8cd605773d1d1edc6342f0060fcc9c"} err="failed to get container status \"826829c358f438057ee9dff0c61ead937e8cd605773d1d1edc6342f0060fcc9c\": rpc error: code = NotFound desc = could not find container \"826829c358f438057ee9dff0c61ead937e8cd605773d1d1edc6342f0060fcc9c\": container with ID starting with 826829c358f438057ee9dff0c61ead937e8cd605773d1d1edc6342f0060fcc9c not found: ID does not exist" Jan 27 20:25:15 crc kubenswrapper[4858]: I0127 20:25:15.029350 4858 scope.go:117] "RemoveContainer" containerID="b56efc6b9e88e99a146ff77784515012a695947eb53a29e4477a0c76b364227c" Jan 27 20:25:15 crc kubenswrapper[4858]: E0127 20:25:15.029783 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b56efc6b9e88e99a146ff77784515012a695947eb53a29e4477a0c76b364227c\": container with ID starting with b56efc6b9e88e99a146ff77784515012a695947eb53a29e4477a0c76b364227c not found: ID does not exist" containerID="b56efc6b9e88e99a146ff77784515012a695947eb53a29e4477a0c76b364227c" Jan 27 20:25:15 crc kubenswrapper[4858]: I0127 20:25:15.029823 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b56efc6b9e88e99a146ff77784515012a695947eb53a29e4477a0c76b364227c"} err="failed to get container status \"b56efc6b9e88e99a146ff77784515012a695947eb53a29e4477a0c76b364227c\": rpc error: code = NotFound desc = could not find container \"b56efc6b9e88e99a146ff77784515012a695947eb53a29e4477a0c76b364227c\": container with ID starting with b56efc6b9e88e99a146ff77784515012a695947eb53a29e4477a0c76b364227c not found: ID does not exist" Jan 27 20:25:16 crc kubenswrapper[4858]: I0127 20:25:16.079706 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a59f78bc-e181-4d2f-9415-32d82f902e9f" path="/var/lib/kubelet/pods/a59f78bc-e181-4d2f-9415-32d82f902e9f/volumes" Jan 27 20:25:18 crc kubenswrapper[4858]: I0127 20:25:18.348768 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6f9f75d44c-9lgbg"] Jan 27 20:25:18 crc kubenswrapper[4858]: E0127 20:25:18.349272 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52190d7c-3903-46b5-8fa4-96ef6b154bbe" containerName="extract" Jan 27 20:25:18 crc kubenswrapper[4858]: I0127 20:25:18.349283 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="52190d7c-3903-46b5-8fa4-96ef6b154bbe" containerName="extract" Jan 27 20:25:18 crc kubenswrapper[4858]: E0127 20:25:18.349300 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52190d7c-3903-46b5-8fa4-96ef6b154bbe" containerName="util" Jan 27 20:25:18 crc kubenswrapper[4858]: I0127 20:25:18.349306 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="52190d7c-3903-46b5-8fa4-96ef6b154bbe" containerName="util" Jan 27 20:25:18 crc kubenswrapper[4858]: E0127 20:25:18.349314 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59f78bc-e181-4d2f-9415-32d82f902e9f" containerName="extract-utilities" Jan 27 20:25:18 crc kubenswrapper[4858]: I0127 20:25:18.349320 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59f78bc-e181-4d2f-9415-32d82f902e9f" containerName="extract-utilities" Jan 27 20:25:18 crc 
kubenswrapper[4858]: E0127 20:25:18.349331 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52190d7c-3903-46b5-8fa4-96ef6b154bbe" containerName="pull" Jan 27 20:25:18 crc kubenswrapper[4858]: I0127 20:25:18.349337 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="52190d7c-3903-46b5-8fa4-96ef6b154bbe" containerName="pull" Jan 27 20:25:18 crc kubenswrapper[4858]: E0127 20:25:18.349350 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59f78bc-e181-4d2f-9415-32d82f902e9f" containerName="registry-server" Jan 27 20:25:18 crc kubenswrapper[4858]: I0127 20:25:18.349356 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59f78bc-e181-4d2f-9415-32d82f902e9f" containerName="registry-server" Jan 27 20:25:18 crc kubenswrapper[4858]: E0127 20:25:18.349366 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59f78bc-e181-4d2f-9415-32d82f902e9f" containerName="extract-content" Jan 27 20:25:18 crc kubenswrapper[4858]: I0127 20:25:18.349372 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59f78bc-e181-4d2f-9415-32d82f902e9f" containerName="extract-content" Jan 27 20:25:18 crc kubenswrapper[4858]: I0127 20:25:18.349494 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="52190d7c-3903-46b5-8fa4-96ef6b154bbe" containerName="extract" Jan 27 20:25:18 crc kubenswrapper[4858]: I0127 20:25:18.349512 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59f78bc-e181-4d2f-9415-32d82f902e9f" containerName="registry-server" Jan 27 20:25:18 crc kubenswrapper[4858]: I0127 20:25:18.350005 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6f9f75d44c-9lgbg" Jan 27 20:25:18 crc kubenswrapper[4858]: I0127 20:25:18.352794 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-vtn9n" Jan 27 20:25:18 crc kubenswrapper[4858]: I0127 20:25:18.375726 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6f9f75d44c-9lgbg"] Jan 27 20:25:18 crc kubenswrapper[4858]: I0127 20:25:18.549473 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gww2\" (UniqueName: \"kubernetes.io/projected/307753ad-bb67-4220-9b56-e588037652f4-kube-api-access-8gww2\") pod \"openstack-operator-controller-init-6f9f75d44c-9lgbg\" (UID: \"307753ad-bb67-4220-9b56-e588037652f4\") " pod="openstack-operators/openstack-operator-controller-init-6f9f75d44c-9lgbg" Jan 27 20:25:18 crc kubenswrapper[4858]: I0127 20:25:18.650861 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gww2\" (UniqueName: \"kubernetes.io/projected/307753ad-bb67-4220-9b56-e588037652f4-kube-api-access-8gww2\") pod \"openstack-operator-controller-init-6f9f75d44c-9lgbg\" (UID: \"307753ad-bb67-4220-9b56-e588037652f4\") " pod="openstack-operators/openstack-operator-controller-init-6f9f75d44c-9lgbg" Jan 27 20:25:18 crc kubenswrapper[4858]: I0127 20:25:18.672817 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gww2\" (UniqueName: \"kubernetes.io/projected/307753ad-bb67-4220-9b56-e588037652f4-kube-api-access-8gww2\") pod \"openstack-operator-controller-init-6f9f75d44c-9lgbg\" (UID: \"307753ad-bb67-4220-9b56-e588037652f4\") " pod="openstack-operators/openstack-operator-controller-init-6f9f75d44c-9lgbg" 
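Every pod in this log mounts exactly one projected volume named kube-api-access-<random suffix> (wtvcp, 8gww2, ndhst, ...). A sketch of what such a volume expands to, mirroring the well-known service-account token projection; the field values (token path, ~one-hour expiry, kube-root-ca.crt source, downward-API namespace file) follow upstream defaults and are assumptions here, not something this log shows:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// kubeAPIAccessVolume builds the projected volume the admission layer is
// assumed to inject for each pod, e.g. kube-api-access-8gww2 above.
func kubeAPIAccessVolume(suffix string) corev1.Volume {
	expiry := int64(3607) // assumed default expiry, in seconds
	return corev1.Volume{
		Name: "kube-api-access-" + suffix,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
}

func main() {
	fmt.Println(kubeAPIAccessVolume("8gww2").Name)
}
```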
Jan 27 20:25:18 crc kubenswrapper[4858]: I0127 20:25:18.967079 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6f9f75d44c-9lgbg" Jan 27 20:25:19 crc kubenswrapper[4858]: I0127 20:25:19.388620 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6f9f75d44c-9lgbg"] Jan 27 20:25:19 crc kubenswrapper[4858]: I0127 20:25:19.993885 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6f9f75d44c-9lgbg" event={"ID":"307753ad-bb67-4220-9b56-e588037652f4","Type":"ContainerStarted","Data":"6426067625c3e294b7df0a6bb6064297b8115c11501402e7ff3bbefb4996169a"} Jan 27 20:25:24 crc kubenswrapper[4858]: I0127 20:25:24.028653 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6f9f75d44c-9lgbg" event={"ID":"307753ad-bb67-4220-9b56-e588037652f4","Type":"ContainerStarted","Data":"0c7540e96dc3fee49fc1ebc5f540cf27d601db3fc0330f0f90162e8e950c3441"} Jan 27 20:25:24 crc kubenswrapper[4858]: I0127 20:25:24.029771 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6f9f75d44c-9lgbg" Jan 27 20:25:24 crc kubenswrapper[4858]: I0127 20:25:24.079011 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6f9f75d44c-9lgbg" podStartSLOduration=2.460109186 podStartE2EDuration="6.078990573s" podCreationTimestamp="2026-01-27 20:25:18 +0000 UTC" firstStartedPulling="2026-01-27 20:25:19.397116707 +0000 UTC m=+1064.104932413" lastFinishedPulling="2026-01-27 20:25:23.015998094 +0000 UTC m=+1067.723813800" observedRunningTime="2026-01-27 20:25:24.074817086 +0000 UTC m=+1068.782632812" watchObservedRunningTime="2026-01-27 20:25:24.078990573 +0000 UTC m=+1068.786806289" Jan 27 20:25:28 crc kubenswrapper[4858]: I0127 20:25:28.968913 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6f9f75d44c-9lgbg" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.421842 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-pl99n"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.423515 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pl99n" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.428080 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mblhq" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.429032 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6tsd5"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.430074 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6tsd5" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.444648 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-bvgzg" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.450931 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-pl99n"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.457566 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6tsd5"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.465412 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-qlssw"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.466528 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qlssw" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.470914 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-hmkrm" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.481622 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-qlssw"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.492664 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-hg2t5"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.493930 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hg2t5" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.498390 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-6lq2q" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.530653 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-m4pbf"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.532720 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-m4pbf" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.535592 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-f9bss" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.552381 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-hg2t5"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.588338 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndhst\" (UniqueName: \"kubernetes.io/projected/50605190-4834-4573-b8c9-70f5ca60b820-kube-api-access-ndhst\") pod \"cinder-operator-controller-manager-7478f7dbf9-6tsd5\" (UID: \"50605190-4834-4573-b8c9-70f5ca60b820\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6tsd5" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.588440 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzrmp\" (UniqueName: \"kubernetes.io/projected/fd5e8600-d46a-4463-b592-f6d6025bf66f-kube-api-access-jzrmp\") pod \"designate-operator-controller-manager-b45d7bf98-qlssw\" (UID: \"fd5e8600-d46a-4463-b592-f6d6025bf66f\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qlssw" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.588475 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvvdj\" (UniqueName: \"kubernetes.io/projected/f2bb693c-1d95-483e-b7c5-151516bd015e-kube-api-access-lvvdj\") pod \"barbican-operator-controller-manager-7f86f8796f-pl99n\" (UID: \"f2bb693c-1d95-483e-b7c5-151516bd015e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pl99n" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.602218 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-m4pbf"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.631467 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6rrnl"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.632334 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.632961 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.633070 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6rrnl" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.639971 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-x4g29" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.640195 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-zwvz6" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.643824 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-k69nl"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.644785 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-k69nl" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.644918 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.654599 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qgxh7" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.662927 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-dxfwr"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.664073 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dxfwr" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.673743 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-npzwv" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.680042 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-k69nl"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.689432 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzrmp\" (UniqueName: \"kubernetes.io/projected/fd5e8600-d46a-4463-b592-f6d6025bf66f-kube-api-access-jzrmp\") pod \"designate-operator-controller-manager-b45d7bf98-qlssw\" (UID: \"fd5e8600-d46a-4463-b592-f6d6025bf66f\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qlssw" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.689477 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvvdj\" (UniqueName: \"kubernetes.io/projected/f2bb693c-1d95-483e-b7c5-151516bd015e-kube-api-access-lvvdj\") pod \"barbican-operator-controller-manager-7f86f8796f-pl99n\" (UID: \"f2bb693c-1d95-483e-b7c5-151516bd015e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pl99n" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.689502 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgwbj\" (UniqueName: \"kubernetes.io/projected/6074b126-8795-48bc-8984-fc25402032a2-kube-api-access-xgwbj\") pod \"heat-operator-controller-manager-594c8c9d5d-m4pbf\" (UID: \"6074b126-8795-48bc-8984-fc25402032a2\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-m4pbf" Jan 27 20:25:48 
crc kubenswrapper[4858]: I0127 20:25:48.689586 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndhst\" (UniqueName: \"kubernetes.io/projected/50605190-4834-4573-b8c9-70f5ca60b820-kube-api-access-ndhst\") pod \"cinder-operator-controller-manager-7478f7dbf9-6tsd5\" (UID: \"50605190-4834-4573-b8c9-70f5ca60b820\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6tsd5" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.689612 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mft5\" (UniqueName: \"kubernetes.io/projected/eba796fd-f7a8-4f83-9a75-7036f77d73f1-kube-api-access-8mft5\") pod \"glance-operator-controller-manager-78fdd796fd-hg2t5\" (UID: \"eba796fd-f7a8-4f83-9a75-7036f77d73f1\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hg2t5" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.699973 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6rrnl"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.716420 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.720770 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndhst\" (UniqueName: \"kubernetes.io/projected/50605190-4834-4573-b8c9-70f5ca60b820-kube-api-access-ndhst\") pod \"cinder-operator-controller-manager-7478f7dbf9-6tsd5\" (UID: \"50605190-4834-4573-b8c9-70f5ca60b820\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6tsd5" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.722670 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvvdj\" (UniqueName: \"kubernetes.io/projected/f2bb693c-1d95-483e-b7c5-151516bd015e-kube-api-access-lvvdj\") pod \"barbican-operator-controller-manager-7f86f8796f-pl99n\" (UID: \"f2bb693c-1d95-483e-b7c5-151516bd015e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pl99n" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.730985 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-dxfwr"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.750187 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-cnhqv"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.751027 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pl99n" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.751198 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-54b92"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.751811 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-54b92" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.752153 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cnhqv" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.755802 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-cnhqv"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.756669 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-srz49" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.756827 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6wk2f" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.761014 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzrmp\" (UniqueName: \"kubernetes.io/projected/fd5e8600-d46a-4463-b592-f6d6025bf66f-kube-api-access-jzrmp\") pod \"designate-operator-controller-manager-b45d7bf98-qlssw\" (UID: \"fd5e8600-d46a-4463-b592-f6d6025bf66f\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qlssw" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.764597 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6tsd5" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.772429 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-m6lz4"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.773978 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m6lz4" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.776460 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-sgkmg" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.784313 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qlssw" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.790111 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-54b92"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.798087 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgwbj\" (UniqueName: \"kubernetes.io/projected/6074b126-8795-48bc-8984-fc25402032a2-kube-api-access-xgwbj\") pod \"heat-operator-controller-manager-594c8c9d5d-m4pbf\" (UID: \"6074b126-8795-48bc-8984-fc25402032a2\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-m4pbf" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.798397 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert\") pod \"infra-operator-controller-manager-694cf4f878-tskvm\" (UID: \"f5527334-db65-4031-a24f-9aafcffb6708\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.798617 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp4fr\" (UniqueName: \"kubernetes.io/projected/feb30e7d-db27-4e87-ba07-f4730b228588-kube-api-access-cp4fr\") pod \"neutron-operator-controller-manager-78d58447c5-m6lz4\" (UID: \"feb30e7d-db27-4e87-ba07-f4730b228588\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m6lz4" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.798765 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd54v\" (UniqueName: \"kubernetes.io/projected/f5527334-db65-4031-a24f-9aafcffb6708-kube-api-access-vd54v\") pod \"infra-operator-controller-manager-694cf4f878-tskvm\" (UID: \"f5527334-db65-4031-a24f-9aafcffb6708\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.798896 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db6v5\" (UniqueName: \"kubernetes.io/projected/8a5eb91f-e957-4f9d-86c9-5f8905c6bee4-kube-api-access-db6v5\") pod \"keystone-operator-controller-manager-b8b6d4659-dxfwr\" (UID: \"8a5eb91f-e957-4f9d-86c9-5f8905c6bee4\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dxfwr" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.799012 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvr5w\" (UniqueName: \"kubernetes.io/projected/e86b137e-cd0c-4243-801f-dad4eb19373b-kube-api-access-bvr5w\") pod \"horizon-operator-controller-manager-77d5c5b54f-6rrnl\" (UID: \"e86b137e-cd0c-4243-801f-dad4eb19373b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6rrnl" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.799209 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mft5\" (UniqueName: \"kubernetes.io/projected/eba796fd-f7a8-4f83-9a75-7036f77d73f1-kube-api-access-8mft5\") pod \"glance-operator-controller-manager-78fdd796fd-hg2t5\" (UID: \"eba796fd-f7a8-4f83-9a75-7036f77d73f1\") " 
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hg2t5" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.799407 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5p5n\" (UniqueName: \"kubernetes.io/projected/397758a8-62c2-41ba-8177-5309d797bb2f-kube-api-access-m5p5n\") pod \"ironic-operator-controller-manager-598f7747c9-k69nl\" (UID: \"397758a8-62c2-41ba-8177-5309d797bb2f\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-k69nl" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.815378 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-f7gwl"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.816614 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-f7gwl" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.820449 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-bt4vt" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.826622 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-m6lz4"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.834075 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgwbj\" (UniqueName: \"kubernetes.io/projected/6074b126-8795-48bc-8984-fc25402032a2-kube-api-access-xgwbj\") pod \"heat-operator-controller-manager-594c8c9d5d-m4pbf\" (UID: \"6074b126-8795-48bc-8984-fc25402032a2\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-m4pbf" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.835021 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mft5\" (UniqueName: \"kubernetes.io/projected/eba796fd-f7a8-4f83-9a75-7036f77d73f1-kube-api-access-8mft5\") pod \"glance-operator-controller-manager-78fdd796fd-hg2t5\" (UID: \"eba796fd-f7a8-4f83-9a75-7036f77d73f1\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hg2t5" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.855765 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dxhnn"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.856666 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dxhnn" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.865642 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-bx9rs" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.869248 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-f7gwl"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.890184 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-m4pbf" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.891959 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dxhnn"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.903041 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvr5w\" (UniqueName: \"kubernetes.io/projected/e86b137e-cd0c-4243-801f-dad4eb19373b-kube-api-access-bvr5w\") pod \"horizon-operator-controller-manager-77d5c5b54f-6rrnl\" (UID: \"e86b137e-cd0c-4243-801f-dad4eb19373b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6rrnl" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.903590 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rbpf\" (UniqueName: \"kubernetes.io/projected/74b2bb8d-cae5-4033-b999-73e3ed604cb9-kube-api-access-7rbpf\") pod \"octavia-operator-controller-manager-5f4cd88d46-dxhnn\" (UID: \"74b2bb8d-cae5-4033-b999-73e3ed604cb9\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dxhnn" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.903624 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5p5n\" (UniqueName: \"kubernetes.io/projected/397758a8-62c2-41ba-8177-5309d797bb2f-kube-api-access-m5p5n\") pod \"ironic-operator-controller-manager-598f7747c9-k69nl\" (UID: \"397758a8-62c2-41ba-8177-5309d797bb2f\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-k69nl" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.903659 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djcl8\" (UniqueName: \"kubernetes.io/projected/7de86ff1-90b3-470b-bab1-344555db1153-kube-api-access-djcl8\") pod \"manila-operator-controller-manager-78c6999f6f-cnhqv\" (UID: \"7de86ff1-90b3-470b-bab1-344555db1153\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cnhqv" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.903678 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlqr2\" (UniqueName: \"kubernetes.io/projected/314a20ef-a97b-40a6-8a85-b118e64d9a3a-kube-api-access-tlqr2\") pod \"nova-operator-controller-manager-7bdb645866-f7gwl\" (UID: \"314a20ef-a97b-40a6-8a85-b118e64d9a3a\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-f7gwl" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.903725 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert\") pod \"infra-operator-controller-manager-694cf4f878-tskvm\" (UID: \"f5527334-db65-4031-a24f-9aafcffb6708\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.903765 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp4fr\" (UniqueName: \"kubernetes.io/projected/feb30e7d-db27-4e87-ba07-f4730b228588-kube-api-access-cp4fr\") pod \"neutron-operator-controller-manager-78d58447c5-m6lz4\" (UID: \"feb30e7d-db27-4e87-ba07-f4730b228588\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m6lz4" Jan 27 20:25:48 crc 
kubenswrapper[4858]: I0127 20:25:48.903809 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p6cm\" (UniqueName: \"kubernetes.io/projected/446c00be-b860-4220-bcc1-457005d92650-kube-api-access-8p6cm\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-54b92\" (UID: \"446c00be-b860-4220-bcc1-457005d92650\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-54b92"
Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.903840 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd54v\" (UniqueName: \"kubernetes.io/projected/f5527334-db65-4031-a24f-9aafcffb6708-kube-api-access-vd54v\") pod \"infra-operator-controller-manager-694cf4f878-tskvm\" (UID: \"f5527334-db65-4031-a24f-9aafcffb6708\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm"
Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.903867 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db6v5\" (UniqueName: \"kubernetes.io/projected/8a5eb91f-e957-4f9d-86c9-5f8905c6bee4-kube-api-access-db6v5\") pod \"keystone-operator-controller-manager-b8b6d4659-dxfwr\" (UID: \"8a5eb91f-e957-4f9d-86c9-5f8905c6bee4\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dxfwr"
Jan 27 20:25:48 crc kubenswrapper[4858]: E0127 20:25:48.904788 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 27 20:25:48 crc kubenswrapper[4858]: E0127 20:25:48.904973 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert podName:f5527334-db65-4031-a24f-9aafcffb6708 nodeName:}" failed. No retries permitted until 2026-01-27 20:25:49.404954681 +0000 UTC m=+1094.112770387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert") pod "infra-operator-controller-manager-694cf4f878-tskvm" (UID: "f5527334-db65-4031-a24f-9aafcffb6708") : secret "infra-operator-webhook-server-cert" not found
Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.912727 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-jrl5h"]
Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.913938 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jrl5h"
Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.915859 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-g4lf2"
Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.935619 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp4fr\" (UniqueName: \"kubernetes.io/projected/feb30e7d-db27-4e87-ba07-f4730b228588-kube-api-access-cp4fr\") pod \"neutron-operator-controller-manager-78d58447c5-m6lz4\" (UID: \"feb30e7d-db27-4e87-ba07-f4730b228588\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m6lz4"
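The secret.go:188 / nestedpendingoperations.go:348 pair above shows why the infra-operator pod lags the others: its cert volume references infra-operator-webhook-server-cert before that secret exists, so the mount fails and is re-queued no earlier than 500ms later (durationBeforeRetry). A sketch of that exponential backoff; the 500ms initial delay matches the first failure logged here, while the doubling factor and cap are assumptions for illustration:

```go
package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry returns how long the volume operation is barred from
// retrying after the given number of consecutive failures.
func durationBeforeRetry(failures int) time.Duration {
	d := 500 * time.Millisecond // first failure: the "(durationBeforeRetry 500ms)" above
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= 2*time.Minute { // assumed cap
			return 2 * time.Minute
		}
	}
	return d
}

func main() {
	for f := 1; f <= 6; f++ {
		fmt.Printf("failure %d: no retries permitted for %v\n", f, durationBeforeRetry(f))
	}
}
```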
Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.940633 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m6lz4"
Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.944189 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5p5n\" (UniqueName: \"kubernetes.io/projected/397758a8-62c2-41ba-8177-5309d797bb2f-kube-api-access-m5p5n\") pod \"ironic-operator-controller-manager-598f7747c9-k69nl\" (UID: \"397758a8-62c2-41ba-8177-5309d797bb2f\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-k69nl"
Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.944277 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvr5w\" (UniqueName: \"kubernetes.io/projected/e86b137e-cd0c-4243-801f-dad4eb19373b-kube-api-access-bvr5w\") pod \"horizon-operator-controller-manager-77d5c5b54f-6rrnl\" (UID: \"e86b137e-cd0c-4243-801f-dad4eb19373b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6rrnl"
Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.946433 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh"]
Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.947291 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh"
Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.949820 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.950591 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db6v5\" (UniqueName: \"kubernetes.io/projected/8a5eb91f-e957-4f9d-86c9-5f8905c6bee4-kube-api-access-db6v5\") pod \"keystone-operator-controller-manager-b8b6d4659-dxfwr\" (UID: \"8a5eb91f-e957-4f9d-86c9-5f8905c6bee4\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dxfwr"
Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.951217 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xv8r7"
Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.951263 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd54v\" (UniqueName: \"kubernetes.io/projected/f5527334-db65-4031-a24f-9aafcffb6708-kube-api-access-vd54v\") pod \"infra-operator-controller-manager-694cf4f878-tskvm\" (UID: \"f5527334-db65-4031-a24f-9aafcffb6708\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm"
Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.953729 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-8st2f"]
Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.966637 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8st2f" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.975391 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-bwjsz" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.980752 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-jrl5h"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.989778 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6rrnl" Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.995337 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-cp6lt"] Jan 27 20:25:48 crc kubenswrapper[4858]: I0127 20:25:48.996231 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cp6lt" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.005525 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-9zjbf" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.006114 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssk86\" (UniqueName: \"kubernetes.io/projected/b778d97d-e9dc-4017-94ff-9cfd82322a3a-kube-api-access-ssk86\") pod \"swift-operator-controller-manager-547cbdb99f-cp6lt\" (UID: \"b778d97d-e9dc-4017-94ff-9cfd82322a3a\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cp6lt" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.006154 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p6cm\" (UniqueName: \"kubernetes.io/projected/446c00be-b860-4220-bcc1-457005d92650-kube-api-access-8p6cm\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-54b92\" (UID: \"446c00be-b860-4220-bcc1-457005d92650\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-54b92" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.006192 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh\" (UID: \"c3bd5d36-c726-4b79-9c08-22bb23dabc28\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.006218 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsl48\" (UniqueName: \"kubernetes.io/projected/968ee010-0e16-462d-82d3-7c5d61f107a1-kube-api-access-hsl48\") pod \"ovn-operator-controller-manager-6f75f45d54-jrl5h\" (UID: \"968ee010-0e16-462d-82d3-7c5d61f107a1\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jrl5h" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.006242 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tkdh\" (UniqueName: \"kubernetes.io/projected/c4dfc413-8d91-4a08-aef6-47188c0971c4-kube-api-access-5tkdh\") pod 
\"placement-operator-controller-manager-79d5ccc684-8st2f\" (UID: \"c4dfc413-8d91-4a08-aef6-47188c0971c4\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8st2f" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.006262 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rbpf\" (UniqueName: \"kubernetes.io/projected/74b2bb8d-cae5-4033-b999-73e3ed604cb9-kube-api-access-7rbpf\") pod \"octavia-operator-controller-manager-5f4cd88d46-dxhnn\" (UID: \"74b2bb8d-cae5-4033-b999-73e3ed604cb9\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dxhnn" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.006288 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djcl8\" (UniqueName: \"kubernetes.io/projected/7de86ff1-90b3-470b-bab1-344555db1153-kube-api-access-djcl8\") pod \"manila-operator-controller-manager-78c6999f6f-cnhqv\" (UID: \"7de86ff1-90b3-470b-bab1-344555db1153\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cnhqv" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.006307 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlqr2\" (UniqueName: \"kubernetes.io/projected/314a20ef-a97b-40a6-8a85-b118e64d9a3a-kube-api-access-tlqr2\") pod \"nova-operator-controller-manager-7bdb645866-f7gwl\" (UID: \"314a20ef-a97b-40a6-8a85-b118e64d9a3a\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-f7gwl" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.006337 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9vqm\" (UniqueName: \"kubernetes.io/projected/c3bd5d36-c726-4b79-9c08-22bb23dabc28-kube-api-access-l9vqm\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh\" (UID: \"c3bd5d36-c726-4b79-9c08-22bb23dabc28\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.015338 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-k69nl" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.033538 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-8st2f"] Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.033588 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh"] Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.065403 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djcl8\" (UniqueName: \"kubernetes.io/projected/7de86ff1-90b3-470b-bab1-344555db1153-kube-api-access-djcl8\") pod \"manila-operator-controller-manager-78c6999f6f-cnhqv\" (UID: \"7de86ff1-90b3-470b-bab1-344555db1153\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cnhqv" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.077190 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-cp6lt"] Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.081687 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rbpf\" (UniqueName: \"kubernetes.io/projected/74b2bb8d-cae5-4033-b999-73e3ed604cb9-kube-api-access-7rbpf\") pod \"octavia-operator-controller-manager-5f4cd88d46-dxhnn\" (UID: \"74b2bb8d-cae5-4033-b999-73e3ed604cb9\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dxhnn" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.082229 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlqr2\" (UniqueName: \"kubernetes.io/projected/314a20ef-a97b-40a6-8a85-b118e64d9a3a-kube-api-access-tlqr2\") pod \"nova-operator-controller-manager-7bdb645866-f7gwl\" (UID: \"314a20ef-a97b-40a6-8a85-b118e64d9a3a\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-f7gwl" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.101635 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sgtcz"] Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.102462 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dxfwr" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.111007 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsl48\" (UniqueName: \"kubernetes.io/projected/968ee010-0e16-462d-82d3-7c5d61f107a1-kube-api-access-hsl48\") pod \"ovn-operator-controller-manager-6f75f45d54-jrl5h\" (UID: \"968ee010-0e16-462d-82d3-7c5d61f107a1\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jrl5h" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.111057 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tkdh\" (UniqueName: \"kubernetes.io/projected/c4dfc413-8d91-4a08-aef6-47188c0971c4-kube-api-access-5tkdh\") pod \"placement-operator-controller-manager-79d5ccc684-8st2f\" (UID: \"c4dfc413-8d91-4a08-aef6-47188c0971c4\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8st2f" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.111116 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9vqm\" (UniqueName: \"kubernetes.io/projected/c3bd5d36-c726-4b79-9c08-22bb23dabc28-kube-api-access-l9vqm\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh\" (UID: \"c3bd5d36-c726-4b79-9c08-22bb23dabc28\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.111205 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssk86\" (UniqueName: \"kubernetes.io/projected/b778d97d-e9dc-4017-94ff-9cfd82322a3a-kube-api-access-ssk86\") pod \"swift-operator-controller-manager-547cbdb99f-cp6lt\" (UID: \"b778d97d-e9dc-4017-94ff-9cfd82322a3a\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cp6lt" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.111271 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh\" (UID: \"c3bd5d36-c726-4b79-9c08-22bb23dabc28\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" Jan 27 20:25:49 crc kubenswrapper[4858]: E0127 20:25:49.111466 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 20:25:49 crc kubenswrapper[4858]: E0127 20:25:49.111537 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert podName:c3bd5d36-c726-4b79-9c08-22bb23dabc28 nodeName:}" failed. No retries permitted until 2026-01-27 20:25:49.611517632 +0000 UTC m=+1094.319333338 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" (UID: "c3bd5d36-c726-4b79-9c08-22bb23dabc28") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.127281 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p6cm\" (UniqueName: \"kubernetes.io/projected/446c00be-b860-4220-bcc1-457005d92650-kube-api-access-8p6cm\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-54b92\" (UID: \"446c00be-b860-4220-bcc1-457005d92650\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-54b92" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.134247 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hg2t5" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.140359 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tkdh\" (UniqueName: \"kubernetes.io/projected/c4dfc413-8d91-4a08-aef6-47188c0971c4-kube-api-access-5tkdh\") pod \"placement-operator-controller-manager-79d5ccc684-8st2f\" (UID: \"c4dfc413-8d91-4a08-aef6-47188c0971c4\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8st2f" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.150170 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sgtcz" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.151279 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsl48\" (UniqueName: \"kubernetes.io/projected/968ee010-0e16-462d-82d3-7c5d61f107a1-kube-api-access-hsl48\") pod \"ovn-operator-controller-manager-6f75f45d54-jrl5h\" (UID: \"968ee010-0e16-462d-82d3-7c5d61f107a1\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jrl5h" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.153882 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-cv4nh" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.157992 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssk86\" (UniqueName: \"kubernetes.io/projected/b778d97d-e9dc-4017-94ff-9cfd82322a3a-kube-api-access-ssk86\") pod \"swift-operator-controller-manager-547cbdb99f-cp6lt\" (UID: \"b778d97d-e9dc-4017-94ff-9cfd82322a3a\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cp6lt" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.182050 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-54b92" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.191157 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cp6lt" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.205741 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sgtcz"] Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.208911 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cnhqv" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.252348 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9vqm\" (UniqueName: \"kubernetes.io/projected/c3bd5d36-c726-4b79-9c08-22bb23dabc28-kube-api-access-l9vqm\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh\" (UID: \"c3bd5d36-c726-4b79-9c08-22bb23dabc28\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.270593 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-f7gwl" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.279154 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dxhnn" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.285854 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-7w8hk"] Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.287314 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7w8hk" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.290153 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-6hnlz" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.330151 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42gr8\" (UniqueName: \"kubernetes.io/projected/304980dc-cb07-41fa-ba11-1262d5a2b43b-kube-api-access-42gr8\") pod \"telemetry-operator-controller-manager-85cd9769bb-sgtcz\" (UID: \"304980dc-cb07-41fa-ba11-1262d5a2b43b\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sgtcz" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.337454 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-7w8hk"] Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.346491 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jrl5h" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.440725 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8st2f" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.443099 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42gr8\" (UniqueName: \"kubernetes.io/projected/304980dc-cb07-41fa-ba11-1262d5a2b43b-kube-api-access-42gr8\") pod \"telemetry-operator-controller-manager-85cd9769bb-sgtcz\" (UID: \"304980dc-cb07-41fa-ba11-1262d5a2b43b\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sgtcz" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.443143 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert\") pod \"infra-operator-controller-manager-694cf4f878-tskvm\" (UID: \"f5527334-db65-4031-a24f-9aafcffb6708\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.443202 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm76p\" (UniqueName: \"kubernetes.io/projected/c15c4bec-780c-42d1-8f36-618b255a95f6-kube-api-access-gm76p\") pod \"test-operator-controller-manager-69797bbcbd-7w8hk\" (UID: \"c15c4bec-780c-42d1-8f36-618b255a95f6\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7w8hk" Jan 27 20:25:49 crc kubenswrapper[4858]: E0127 20:25:49.443619 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 20:25:49 crc kubenswrapper[4858]: E0127 20:25:49.443680 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert podName:f5527334-db65-4031-a24f-9aafcffb6708 nodeName:}" failed. No retries permitted until 2026-01-27 20:25:50.443664267 +0000 UTC m=+1095.151479973 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert") pod "infra-operator-controller-manager-694cf4f878-tskvm" (UID: "f5527334-db65-4031-a24f-9aafcffb6708") : secret "infra-operator-webhook-server-cert" not found Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.444105 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5975f685d8-snnk5"] Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.447657 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5975f685d8-snnk5" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.451255 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-llhxl" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.453864 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5975f685d8-snnk5"] Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.473848 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw"] Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.475018 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.479364 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.479681 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2gf2v" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.480628 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.484258 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw"] Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.485401 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42gr8\" (UniqueName: \"kubernetes.io/projected/304980dc-cb07-41fa-ba11-1262d5a2b43b-kube-api-access-42gr8\") pod \"telemetry-operator-controller-manager-85cd9769bb-sgtcz\" (UID: \"304980dc-cb07-41fa-ba11-1262d5a2b43b\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sgtcz" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.514606 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc2j8"] Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.515686 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc2j8" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.518112 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sgtcz" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.518617 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-cz77c" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.522817 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc2j8"] Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.544275 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm76p\" (UniqueName: \"kubernetes.io/projected/c15c4bec-780c-42d1-8f36-618b255a95f6-kube-api-access-gm76p\") pod \"test-operator-controller-manager-69797bbcbd-7w8hk\" (UID: \"c15c4bec-780c-42d1-8f36-618b255a95f6\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7w8hk" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.544395 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkwfc\" (UniqueName: \"kubernetes.io/projected/112cff1f-1841-4fe8-96e2-95d2be2957a2-kube-api-access-gkwfc\") pod \"watcher-operator-controller-manager-5975f685d8-snnk5\" (UID: \"112cff1f-1841-4fe8-96e2-95d2be2957a2\") " pod="openstack-operators/watcher-operator-controller-manager-5975f685d8-snnk5" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.576035 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm76p\" (UniqueName: \"kubernetes.io/projected/c15c4bec-780c-42d1-8f36-618b255a95f6-kube-api-access-gm76p\") pod \"test-operator-controller-manager-69797bbcbd-7w8hk\" (UID: \"c15c4bec-780c-42d1-8f36-618b255a95f6\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7w8hk" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.631040 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7w8hk" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.646766 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.646846 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4b4d\" (UniqueName: \"kubernetes.io/projected/f75129ba-73c8-4f91-99b0-42d191fb0510-kube-api-access-f4b4d\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.646885 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkwfc\" (UniqueName: \"kubernetes.io/projected/112cff1f-1841-4fe8-96e2-95d2be2957a2-kube-api-access-gkwfc\") pod \"watcher-operator-controller-manager-5975f685d8-snnk5\" (UID: \"112cff1f-1841-4fe8-96e2-95d2be2957a2\") " pod="openstack-operators/watcher-operator-controller-manager-5975f685d8-snnk5" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.646947 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh\" (UID: \"c3bd5d36-c726-4b79-9c08-22bb23dabc28\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.646967 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.647035 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nvhb\" (UniqueName: \"kubernetes.io/projected/9e4c347f-b102-40c1-8935-77fdef528d14-kube-api-access-7nvhb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tc2j8\" (UID: \"9e4c347f-b102-40c1-8935-77fdef528d14\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc2j8" Jan 27 20:25:49 crc kubenswrapper[4858]: E0127 20:25:49.651801 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 20:25:49 crc kubenswrapper[4858]: E0127 20:25:49.651862 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert podName:c3bd5d36-c726-4b79-9c08-22bb23dabc28 nodeName:}" failed. No retries permitted until 2026-01-27 20:25:50.651846033 +0000 UTC m=+1095.359661739 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" (UID: "c3bd5d36-c726-4b79-9c08-22bb23dabc28") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.666192 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6tsd5"] Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.669726 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-qlssw"] Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.696529 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkwfc\" (UniqueName: \"kubernetes.io/projected/112cff1f-1841-4fe8-96e2-95d2be2957a2-kube-api-access-gkwfc\") pod \"watcher-operator-controller-manager-5975f685d8-snnk5\" (UID: \"112cff1f-1841-4fe8-96e2-95d2be2957a2\") " pod="openstack-operators/watcher-operator-controller-manager-5975f685d8-snnk5" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.701717 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.750401 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.750459 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nvhb\" (UniqueName: \"kubernetes.io/projected/9e4c347f-b102-40c1-8935-77fdef528d14-kube-api-access-7nvhb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tc2j8\" (UID: \"9e4c347f-b102-40c1-8935-77fdef528d14\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc2j8" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.750498 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.750626 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4b4d\" (UniqueName: \"kubernetes.io/projected/f75129ba-73c8-4f91-99b0-42d191fb0510-kube-api-access-f4b4d\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:25:49 crc kubenswrapper[4858]: E0127 20:25:49.751097 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 20:25:49 crc kubenswrapper[4858]: E0127 20:25:49.751148 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs 
podName:f75129ba-73c8-4f91-99b0-42d191fb0510 nodeName:}" failed. No retries permitted until 2026-01-27 20:25:50.251129902 +0000 UTC m=+1094.958945608 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs") pod "openstack-operator-controller-manager-86d6949bb8-k78rw" (UID: "f75129ba-73c8-4f91-99b0-42d191fb0510") : secret "webhook-server-cert" not found Jan 27 20:25:49 crc kubenswrapper[4858]: E0127 20:25:49.751477 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 20:25:49 crc kubenswrapper[4858]: E0127 20:25:49.751512 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs podName:f75129ba-73c8-4f91-99b0-42d191fb0510 nodeName:}" failed. No retries permitted until 2026-01-27 20:25:50.251502062 +0000 UTC m=+1094.959317778 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs") pod "openstack-operator-controller-manager-86d6949bb8-k78rw" (UID: "f75129ba-73c8-4f91-99b0-42d191fb0510") : secret "metrics-server-cert" not found Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.813819 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4b4d\" (UniqueName: \"kubernetes.io/projected/f75129ba-73c8-4f91-99b0-42d191fb0510-kube-api-access-f4b4d\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.816722 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nvhb\" (UniqueName: \"kubernetes.io/projected/9e4c347f-b102-40c1-8935-77fdef528d14-kube-api-access-7nvhb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tc2j8\" (UID: \"9e4c347f-b102-40c1-8935-77fdef528d14\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc2j8" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.865435 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5975f685d8-snnk5" Jan 27 20:25:49 crc kubenswrapper[4858]: I0127 20:25:49.907092 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc2j8" Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.133311 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-pl99n"] Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.274757 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.274825 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.274981 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.275042 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs podName:f75129ba-73c8-4f91-99b0-42d191fb0510 nodeName:}" failed. No retries permitted until 2026-01-27 20:25:51.275023394 +0000 UTC m=+1095.982839100 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs") pod "openstack-operator-controller-manager-86d6949bb8-k78rw" (UID: "f75129ba-73c8-4f91-99b0-42d191fb0510") : secret "metrics-server-cert" not found Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.275483 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.275513 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs podName:f75129ba-73c8-4f91-99b0-42d191fb0510 nodeName:}" failed. No retries permitted until 2026-01-27 20:25:51.275503557 +0000 UTC m=+1095.983319263 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs") pod "openstack-operator-controller-manager-86d6949bb8-k78rw" (UID: "f75129ba-73c8-4f91-99b0-42d191fb0510") : secret "webhook-server-cert" not found Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.344115 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qlssw" event={"ID":"fd5e8600-d46a-4463-b592-f6d6025bf66f","Type":"ContainerStarted","Data":"54dda5c1531ed15cb81ce758aab75722ad1f79a64ca018925379974dc6ac9f48"} Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.348229 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pl99n" event={"ID":"f2bb693c-1d95-483e-b7c5-151516bd015e","Type":"ContainerStarted","Data":"23707a5c27771f0d714e6294a13f090151c0e41e68a3dfde4749bcf2f89c2384"} Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.351455 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6tsd5" event={"ID":"50605190-4834-4573-b8c9-70f5ca60b820","Type":"ContainerStarted","Data":"007fbb8ba4cc3b9f39ad5be0a437113c49ed9b7b9d22df97f2c2de14292f806b"} Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.388947 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-m6lz4"] Jan 27 20:25:50 crc kubenswrapper[4858]: W0127 20:25:50.391745 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6074b126_8795_48bc_8984_fc25402032a2.slice/crio-52845eeb4f8aefe5a461bf2c6e69e6507ce1dd0a80a19de082fd285771f4cb30 WatchSource:0}: Error finding container 52845eeb4f8aefe5a461bf2c6e69e6507ce1dd0a80a19de082fd285771f4cb30: Status 404 returned error can't find the container with id 52845eeb4f8aefe5a461bf2c6e69e6507ce1dd0a80a19de082fd285771f4cb30 Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.396245 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-dxfwr"] Jan 27 20:25:50 crc kubenswrapper[4858]: W0127 20:25:50.401465 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod397758a8_62c2_41ba_8177_5309d797bb2f.slice/crio-344380b752f65b5f91448d4ac9d98d37c376f800ead0eb4f2f7584ab362a9b28 WatchSource:0}: Error finding container 344380b752f65b5f91448d4ac9d98d37c376f800ead0eb4f2f7584ab362a9b28: Status 404 returned error can't find the container with id 344380b752f65b5f91448d4ac9d98d37c376f800ead0eb4f2f7584ab362a9b28 Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.407431 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6rrnl"] Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.417522 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-m4pbf"] Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.424149 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-hg2t5"] Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.430106 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-k69nl"] Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.482464 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert\") pod \"infra-operator-controller-manager-694cf4f878-tskvm\" (UID: \"f5527334-db65-4031-a24f-9aafcffb6708\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm" Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.482864 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.483952 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert podName:f5527334-db65-4031-a24f-9aafcffb6708 nodeName:}" failed. No retries permitted until 2026-01-27 20:25:52.483933279 +0000 UTC m=+1097.191748985 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert") pod "infra-operator-controller-manager-694cf4f878-tskvm" (UID: "f5527334-db65-4031-a24f-9aafcffb6708") : secret "infra-operator-webhook-server-cert" not found Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.689573 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh\" (UID: \"c3bd5d36-c726-4b79-9c08-22bb23dabc28\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.689762 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.689817 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert podName:c3bd5d36-c726-4b79-9c08-22bb23dabc28 nodeName:}" failed. No retries permitted until 2026-01-27 20:25:52.689801021 +0000 UTC m=+1097.397616727 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" (UID: "c3bd5d36-c726-4b79-9c08-22bb23dabc28") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.766753 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-cnhqv"] Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.782336 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dxhnn"] Jan 27 20:25:50 crc kubenswrapper[4858]: W0127 20:25:50.813923 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb778d97d_e9dc_4017_94ff_9cfd82322a3a.slice/crio-d2a5989dd0616df0a50699ef28d0f9768d29c52d47cd483f47acbed4fa1e2c66 WatchSource:0}: Error finding container d2a5989dd0616df0a50699ef28d0f9768d29c52d47cd483f47acbed4fa1e2c66: Status 404 returned error can't find the container with id d2a5989dd0616df0a50699ef28d0f9768d29c52d47cd483f47acbed4fa1e2c66 Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.817526 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sgtcz"] Jan 27 20:25:50 crc kubenswrapper[4858]: W0127 20:25:50.826983 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod304980dc_cb07_41fa_ba11_1262d5a2b43b.slice/crio-3ade75e6d95512e541ad8617c7a9a9d7c6efd5debd913021ec5f22b592d0ab76 WatchSource:0}: Error finding container 3ade75e6d95512e541ad8617c7a9a9d7c6efd5debd913021ec5f22b592d0ab76: Status 404 returned error can't find the container with id 3ade75e6d95512e541ad8617c7a9a9d7c6efd5debd913021ec5f22b592d0ab76 Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.827532 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-54b92"] Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.842089 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-8st2f"] Jan 27 20:25:50 crc kubenswrapper[4858]: W0127 20:25:50.848768 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7de86ff1_90b3_470b_bab1_344555db1153.slice/crio-470555a723cb9de964ff4b257cd09fb615493cac3e4e2f0fe30d5f229ed4f7c3 WatchSource:0}: Error finding container 470555a723cb9de964ff4b257cd09fb615493cac3e4e2f0fe30d5f229ed4f7c3: Status 404 returned error can't find the container with id 470555a723cb9de964ff4b257cd09fb615493cac3e4e2f0fe30d5f229ed4f7c3 Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.849614 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-7w8hk"] Jan 27 20:25:50 crc kubenswrapper[4858]: W0127 20:25:50.853364 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4dfc413_8d91_4a08_aef6_47188c0971c4.slice/crio-bab70e39bb2b5e3f62888597d407174912423327dc5d295a311ed472030170ed WatchSource:0}: Error finding container bab70e39bb2b5e3f62888597d407174912423327dc5d295a311ed472030170ed: Status 404 returned 
error can't find the container with id bab70e39bb2b5e3f62888597d407174912423327dc5d295a311ed472030170ed Jan 27 20:25:50 crc kubenswrapper[4858]: W0127 20:25:50.855898 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod314a20ef_a97b_40a6_8a85_b118e64d9a3a.slice/crio-90bb59acd2aefb9309362346c4814b822595e906d2bebdb6ba2f512be60c2c66 WatchSource:0}: Error finding container 90bb59acd2aefb9309362346c4814b822595e906d2bebdb6ba2f512be60c2c66: Status 404 returned error can't find the container with id 90bb59acd2aefb9309362346c4814b822595e906d2bebdb6ba2f512be60c2c66 Jan 27 20:25:50 crc kubenswrapper[4858]: W0127 20:25:50.856308 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e4c347f_b102_40c1_8935_77fdef528d14.slice/crio-4264d188956870ec0a14f72203a037ecb4605da6b800d8a2eceb892fcce6f5be WatchSource:0}: Error finding container 4264d188956870ec0a14f72203a037ecb4605da6b800d8a2eceb892fcce6f5be: Status 404 returned error can't find the container with id 4264d188956870ec0a14f72203a037ecb4605da6b800d8a2eceb892fcce6f5be Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.857533 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-jrl5h"] Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.859795 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-djcl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-cnhqv_openstack-operators(7de86ff1-90b3-470b-bab1-344555db1153): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.861013 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cnhqv" podUID="7de86ff1-90b3-470b-bab1-344555db1153" Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.864912 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7nvhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-tc2j8_openstack-operators(9e4c347f-b102-40c1-8935-77fdef528d14): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.866722 4858 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc2j8" podUID="9e4c347f-b102-40c1-8935-77fdef528d14" Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.874280 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-cp6lt"] Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.877645 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.129.56.46:5001/openstack-k8s-operators/watcher-operator:add353f857c04debbf620f926c6c19f4f45c7f75,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gkwfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5975f685d8-snnk5_openstack-operators(112cff1f-1841-4fe8-96e2-95d2be2957a2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.878792 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5975f685d8-snnk5" podUID="112cff1f-1841-4fe8-96e2-95d2be2957a2" Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.881488 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tlqr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-f7gwl_openstack-operators(314a20ef-a97b-40a6-8a85-b118e64d9a3a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.883021 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-f7gwl" podUID="314a20ef-a97b-40a6-8a85-b118e64d9a3a" Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.886684 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: 
{{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5tkdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-8st2f_openstack-operators(c4dfc413-8d91-4a08-aef6-47188c0971c4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 20:25:50 crc kubenswrapper[4858]: E0127 20:25:50.888025 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8st2f" podUID="c4dfc413-8d91-4a08-aef6-47188c0971c4" Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.890620 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-f7gwl"] Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.897887 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5975f685d8-snnk5"] Jan 27 20:25:50 crc kubenswrapper[4858]: I0127 20:25:50.916809 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc2j8"] Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.301317 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.301396 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " 
pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:25:51 crc kubenswrapper[4858]: E0127 20:25:51.301667 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 20:25:51 crc kubenswrapper[4858]: E0127 20:25:51.301736 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs podName:f75129ba-73c8-4f91-99b0-42d191fb0510 nodeName:}" failed. No retries permitted until 2026-01-27 20:25:53.301718446 +0000 UTC m=+1098.009534152 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs") pod "openstack-operator-controller-manager-86d6949bb8-k78rw" (UID: "f75129ba-73c8-4f91-99b0-42d191fb0510") : secret "metrics-server-cert" not found Jan 27 20:25:51 crc kubenswrapper[4858]: E0127 20:25:51.301749 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 20:25:51 crc kubenswrapper[4858]: E0127 20:25:51.301818 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs podName:f75129ba-73c8-4f91-99b0-42d191fb0510 nodeName:}" failed. No retries permitted until 2026-01-27 20:25:53.301801448 +0000 UTC m=+1098.009617154 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs") pod "openstack-operator-controller-manager-86d6949bb8-k78rw" (UID: "f75129ba-73c8-4f91-99b0-42d191fb0510") : secret "webhook-server-cert" not found Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.386715 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dxhnn" event={"ID":"74b2bb8d-cae5-4033-b999-73e3ed604cb9","Type":"ContainerStarted","Data":"1e7e08116e552808482d1bf40686033fad3c89c486c06920b54075553275ba96"} Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.395234 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m6lz4" event={"ID":"feb30e7d-db27-4e87-ba07-f4730b228588","Type":"ContainerStarted","Data":"433029bb5735ac41da80225eda8fb28c5c66df53f20d8aa278f15632a1f249d1"} Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.405033 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hg2t5" event={"ID":"eba796fd-f7a8-4f83-9a75-7036f77d73f1","Type":"ContainerStarted","Data":"ad96b4e043f91f627c3b038485c2f8f38a051863bf1063aaf72836706a213045"} Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.414962 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sgtcz" event={"ID":"304980dc-cb07-41fa-ba11-1262d5a2b43b","Type":"ContainerStarted","Data":"3ade75e6d95512e541ad8617c7a9a9d7c6efd5debd913021ec5f22b592d0ab76"} Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.421290 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6rrnl" event={"ID":"e86b137e-cd0c-4243-801f-dad4eb19373b","Type":"ContainerStarted","Data":"e4850daadb88124dd4357c4559eddcc72689ab1f44b024d6255b41dece931477"} Jan 27 20:25:51 crc 
kubenswrapper[4858]: I0127 20:25:51.425042 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-k69nl" event={"ID":"397758a8-62c2-41ba-8177-5309d797bb2f","Type":"ContainerStarted","Data":"344380b752f65b5f91448d4ac9d98d37c376f800ead0eb4f2f7584ab362a9b28"} Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.428332 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dxfwr" event={"ID":"8a5eb91f-e957-4f9d-86c9-5f8905c6bee4","Type":"ContainerStarted","Data":"d3d2c2d92551d5e389d1022b6f9367e76e6a7c3f318b41ec2534e296b8d94f00"} Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.430883 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cp6lt" event={"ID":"b778d97d-e9dc-4017-94ff-9cfd82322a3a","Type":"ContainerStarted","Data":"d2a5989dd0616df0a50699ef28d0f9768d29c52d47cd483f47acbed4fa1e2c66"} Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.437100 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jrl5h" event={"ID":"968ee010-0e16-462d-82d3-7c5d61f107a1","Type":"ContainerStarted","Data":"9abf71e35229d24c5a266529a4a782e2a980e93d4b701097c97bb6d84f8fc05a"} Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.440104 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-54b92" event={"ID":"446c00be-b860-4220-bcc1-457005d92650","Type":"ContainerStarted","Data":"699a27c3530c06321123d6b28326526fe17929dec05a039e1078fdff49683624"} Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.442243 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc2j8" event={"ID":"9e4c347f-b102-40c1-8935-77fdef528d14","Type":"ContainerStarted","Data":"4264d188956870ec0a14f72203a037ecb4605da6b800d8a2eceb892fcce6f5be"} Jan 27 20:25:51 crc kubenswrapper[4858]: E0127 20:25:51.447355 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc2j8" podUID="9e4c347f-b102-40c1-8935-77fdef528d14" Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.451395 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7w8hk" event={"ID":"c15c4bec-780c-42d1-8f36-618b255a95f6","Type":"ContainerStarted","Data":"1a3499da3b12a7a301aceadb8ce205189a724a78538b5910040d611c50b42b42"} Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.474535 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-f7gwl" event={"ID":"314a20ef-a97b-40a6-8a85-b118e64d9a3a","Type":"ContainerStarted","Data":"90bb59acd2aefb9309362346c4814b822595e906d2bebdb6ba2f512be60c2c66"} Jan 27 20:25:51 crc kubenswrapper[4858]: E0127 20:25:51.477781 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-f7gwl" podUID="314a20ef-a97b-40a6-8a85-b118e64d9a3a" Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.487728 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8st2f" event={"ID":"c4dfc413-8d91-4a08-aef6-47188c0971c4","Type":"ContainerStarted","Data":"bab70e39bb2b5e3f62888597d407174912423327dc5d295a311ed472030170ed"} Jan 27 20:25:51 crc kubenswrapper[4858]: E0127 20:25:51.489228 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8st2f" podUID="c4dfc413-8d91-4a08-aef6-47188c0971c4" Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.500518 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5975f685d8-snnk5" event={"ID":"112cff1f-1841-4fe8-96e2-95d2be2957a2","Type":"ContainerStarted","Data":"fe7bce129a5242d5c63ec8e83debaa6c26a4619e51ef8e25a3f70cd949946a47"} Jan 27 20:25:51 crc kubenswrapper[4858]: E0127 20:25:51.506804 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.46:5001/openstack-k8s-operators/watcher-operator:add353f857c04debbf620f926c6c19f4f45c7f75\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5975f685d8-snnk5" podUID="112cff1f-1841-4fe8-96e2-95d2be2957a2" Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.507810 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-m4pbf" event={"ID":"6074b126-8795-48bc-8984-fc25402032a2","Type":"ContainerStarted","Data":"52845eeb4f8aefe5a461bf2c6e69e6507ce1dd0a80a19de082fd285771f4cb30"} Jan 27 20:25:51 crc kubenswrapper[4858]: I0127 20:25:51.509821 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cnhqv" event={"ID":"7de86ff1-90b3-470b-bab1-344555db1153","Type":"ContainerStarted","Data":"470555a723cb9de964ff4b257cd09fb615493cac3e4e2f0fe30d5f229ed4f7c3"} Jan 27 20:25:51 crc kubenswrapper[4858]: E0127 20:25:51.511541 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cnhqv" podUID="7de86ff1-90b3-470b-bab1-344555db1153" Jan 27 20:25:52 crc kubenswrapper[4858]: E0127 20:25:52.524835 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-f7gwl" podUID="314a20ef-a97b-40a6-8a85-b118e64d9a3a" Jan 27 20:25:52 crc 
kubenswrapper[4858]: E0127 20:25:52.525112 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8st2f" podUID="c4dfc413-8d91-4a08-aef6-47188c0971c4" Jan 27 20:25:52 crc kubenswrapper[4858]: I0127 20:25:52.525135 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert\") pod \"infra-operator-controller-manager-694cf4f878-tskvm\" (UID: \"f5527334-db65-4031-a24f-9aafcffb6708\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm" Jan 27 20:25:52 crc kubenswrapper[4858]: E0127 20:25:52.525204 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cnhqv" podUID="7de86ff1-90b3-470b-bab1-344555db1153" Jan 27 20:25:52 crc kubenswrapper[4858]: E0127 20:25:52.525235 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 20:25:52 crc kubenswrapper[4858]: E0127 20:25:52.525274 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.46:5001/openstack-k8s-operators/watcher-operator:add353f857c04debbf620f926c6c19f4f45c7f75\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5975f685d8-snnk5" podUID="112cff1f-1841-4fe8-96e2-95d2be2957a2" Jan 27 20:25:52 crc kubenswrapper[4858]: E0127 20:25:52.525287 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert podName:f5527334-db65-4031-a24f-9aafcffb6708 nodeName:}" failed. No retries permitted until 2026-01-27 20:25:56.525270059 +0000 UTC m=+1101.233085765 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert") pod "infra-operator-controller-manager-694cf4f878-tskvm" (UID: "f5527334-db65-4031-a24f-9aafcffb6708") : secret "infra-operator-webhook-server-cert" not found Jan 27 20:25:52 crc kubenswrapper[4858]: E0127 20:25:52.525173 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc2j8" podUID="9e4c347f-b102-40c1-8935-77fdef528d14" Jan 27 20:25:52 crc kubenswrapper[4858]: I0127 20:25:52.727640 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh\" (UID: \"c3bd5d36-c726-4b79-9c08-22bb23dabc28\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" Jan 27 20:25:52 crc kubenswrapper[4858]: E0127 20:25:52.727810 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 20:25:52 crc kubenswrapper[4858]: E0127 20:25:52.727856 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert podName:c3bd5d36-c726-4b79-9c08-22bb23dabc28 nodeName:}" failed. No retries permitted until 2026-01-27 20:25:56.727842198 +0000 UTC m=+1101.435657904 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" (UID: "c3bd5d36-c726-4b79-9c08-22bb23dabc28") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 20:25:53 crc kubenswrapper[4858]: I0127 20:25:53.338294 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:25:53 crc kubenswrapper[4858]: I0127 20:25:53.338458 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:25:53 crc kubenswrapper[4858]: E0127 20:25:53.338525 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 20:25:53 crc kubenswrapper[4858]: E0127 20:25:53.338604 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 20:25:53 crc kubenswrapper[4858]: E0127 20:25:53.338640 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs podName:f75129ba-73c8-4f91-99b0-42d191fb0510 nodeName:}" failed. No retries permitted until 2026-01-27 20:25:57.338619391 +0000 UTC m=+1102.046435097 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs") pod "openstack-operator-controller-manager-86d6949bb8-k78rw" (UID: "f75129ba-73c8-4f91-99b0-42d191fb0510") : secret "metrics-server-cert" not found Jan 27 20:25:53 crc kubenswrapper[4858]: E0127 20:25:53.338659 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs podName:f75129ba-73c8-4f91-99b0-42d191fb0510 nodeName:}" failed. No retries permitted until 2026-01-27 20:25:57.338651302 +0000 UTC m=+1102.046467008 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs") pod "openstack-operator-controller-manager-86d6949bb8-k78rw" (UID: "f75129ba-73c8-4f91-99b0-42d191fb0510") : secret "webhook-server-cert" not found Jan 27 20:25:56 crc kubenswrapper[4858]: I0127 20:25:56.592013 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert\") pod \"infra-operator-controller-manager-694cf4f878-tskvm\" (UID: \"f5527334-db65-4031-a24f-9aafcffb6708\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm" Jan 27 20:25:56 crc kubenswrapper[4858]: E0127 20:25:56.592256 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 20:25:56 crc kubenswrapper[4858]: E0127 20:25:56.592532 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert podName:f5527334-db65-4031-a24f-9aafcffb6708 nodeName:}" failed. No retries permitted until 2026-01-27 20:26:04.592514726 +0000 UTC m=+1109.300330432 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert") pod "infra-operator-controller-manager-694cf4f878-tskvm" (UID: "f5527334-db65-4031-a24f-9aafcffb6708") : secret "infra-operator-webhook-server-cert" not found Jan 27 20:25:56 crc kubenswrapper[4858]: I0127 20:25:56.795316 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh\" (UID: \"c3bd5d36-c726-4b79-9c08-22bb23dabc28\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" Jan 27 20:25:56 crc kubenswrapper[4858]: E0127 20:25:56.795496 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 20:25:56 crc kubenswrapper[4858]: E0127 20:25:56.795604 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert podName:c3bd5d36-c726-4b79-9c08-22bb23dabc28 nodeName:}" failed. No retries permitted until 2026-01-27 20:26:04.795577723 +0000 UTC m=+1109.503393509 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" (UID: "c3bd5d36-c726-4b79-9c08-22bb23dabc28") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 20:25:57 crc kubenswrapper[4858]: I0127 20:25:57.404802 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:25:57 crc kubenswrapper[4858]: I0127 20:25:57.404868 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:25:57 crc kubenswrapper[4858]: E0127 20:25:57.404971 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 20:25:57 crc kubenswrapper[4858]: E0127 20:25:57.405011 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 20:25:57 crc kubenswrapper[4858]: E0127 20:25:57.405036 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs podName:f75129ba-73c8-4f91-99b0-42d191fb0510 nodeName:}" failed. No retries permitted until 2026-01-27 20:26:05.405018803 +0000 UTC m=+1110.112834509 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs") pod "openstack-operator-controller-manager-86d6949bb8-k78rw" (UID: "f75129ba-73c8-4f91-99b0-42d191fb0510") : secret "webhook-server-cert" not found Jan 27 20:25:57 crc kubenswrapper[4858]: E0127 20:25:57.405061 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs podName:f75129ba-73c8-4f91-99b0-42d191fb0510 nodeName:}" failed. No retries permitted until 2026-01-27 20:26:05.405051244 +0000 UTC m=+1110.112866940 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs") pod "openstack-operator-controller-manager-86d6949bb8-k78rw" (UID: "f75129ba-73c8-4f91-99b0-42d191fb0510") : secret "metrics-server-cert" not found Jan 27 20:25:59 crc kubenswrapper[4858]: I0127 20:25:59.328298 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:25:59 crc kubenswrapper[4858]: I0127 20:25:59.328695 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:26:03 crc kubenswrapper[4858]: E0127 20:26:03.351477 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 27 20:26:03 crc kubenswrapper[4858]: E0127 20:26:03.352048 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8p6cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-54b92_openstack-operators(446c00be-b860-4220-bcc1-457005d92650): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:26:03 crc kubenswrapper[4858]: E0127 20:26:03.353558 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-54b92" podUID="446c00be-b860-4220-bcc1-457005d92650" Jan 27 20:26:03 crc kubenswrapper[4858]: E0127 20:26:03.595402 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-54b92" podUID="446c00be-b860-4220-bcc1-457005d92650" Jan 27 20:26:03 crc kubenswrapper[4858]: E0127 20:26:03.987676 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 27 20:26:03 crc kubenswrapper[4858]: E0127 20:26:03.987912 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xgwbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-m4pbf_openstack-operators(6074b126-8795-48bc-8984-fc25402032a2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:26:03 crc kubenswrapper[4858]: E0127 20:26:03.989827 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-m4pbf" podUID="6074b126-8795-48bc-8984-fc25402032a2" Jan 27 20:26:04 crc kubenswrapper[4858]: E0127 20:26:04.596314 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd" Jan 27 20:26:04 crc kubenswrapper[4858]: E0127 20:26:04.596913 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rbpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-dxhnn_openstack-operators(74b2bb8d-cae5-4033-b999-73e3ed604cb9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:26:04 crc kubenswrapper[4858]: E0127 20:26:04.598743 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dxhnn" podUID="74b2bb8d-cae5-4033-b999-73e3ed604cb9" Jan 27 20:26:04 crc kubenswrapper[4858]: E0127 20:26:04.604742 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-m4pbf" podUID="6074b126-8795-48bc-8984-fc25402032a2" Jan 27 20:26:04 crc kubenswrapper[4858]: I0127 20:26:04.644635 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert\") pod \"infra-operator-controller-manager-694cf4f878-tskvm\" (UID: \"f5527334-db65-4031-a24f-9aafcffb6708\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm" Jan 27 20:26:04 crc kubenswrapper[4858]: E0127 20:26:04.644860 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 20:26:04 crc kubenswrapper[4858]: E0127 20:26:04.644915 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert podName:f5527334-db65-4031-a24f-9aafcffb6708 nodeName:}" failed. No retries permitted until 2026-01-27 20:26:20.64489487 +0000 UTC m=+1125.352710576 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert") pod "infra-operator-controller-manager-694cf4f878-tskvm" (UID: "f5527334-db65-4031-a24f-9aafcffb6708") : secret "infra-operator-webhook-server-cert" not found Jan 27 20:26:04 crc kubenswrapper[4858]: I0127 20:26:04.847348 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh\" (UID: \"c3bd5d36-c726-4b79-9c08-22bb23dabc28\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" Jan 27 20:26:04 crc kubenswrapper[4858]: I0127 20:26:04.863111 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c3bd5d36-c726-4b79-9c08-22bb23dabc28-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh\" (UID: \"c3bd5d36-c726-4b79-9c08-22bb23dabc28\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" Jan 27 20:26:04 crc kubenswrapper[4858]: I0127 20:26:04.995048 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" Jan 27 20:26:05 crc kubenswrapper[4858]: E0127 20:26:05.258535 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 27 20:26:05 crc kubenswrapper[4858]: E0127 20:26:05.259326 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-42gr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-sgtcz_openstack-operators(304980dc-cb07-41fa-ba11-1262d5a2b43b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:26:05 crc kubenswrapper[4858]: E0127 20:26:05.265203 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sgtcz" podUID="304980dc-cb07-41fa-ba11-1262d5a2b43b" Jan 27 20:26:05 crc kubenswrapper[4858]: I0127 20:26:05.456949 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:26:05 crc kubenswrapper[4858]: I0127 20:26:05.457066 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:26:05 crc kubenswrapper[4858]: I0127 20:26:05.463247 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-webhook-certs\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:26:05 crc kubenswrapper[4858]: I0127 20:26:05.464140 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f75129ba-73c8-4f91-99b0-42d191fb0510-metrics-certs\") pod \"openstack-operator-controller-manager-86d6949bb8-k78rw\" (UID: \"f75129ba-73c8-4f91-99b0-42d191fb0510\") " pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:26:05 crc kubenswrapper[4858]: I0127 20:26:05.476131 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" Jan 27 20:26:05 crc kubenswrapper[4858]: E0127 20:26:05.612423 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sgtcz" podUID="304980dc-cb07-41fa-ba11-1262d5a2b43b" Jan 27 20:26:05 crc kubenswrapper[4858]: E0127 20:26:05.613864 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dxhnn" podUID="74b2bb8d-cae5-4033-b999-73e3ed604cb9" Jan 27 20:26:05 crc kubenswrapper[4858]: E0127 20:26:05.947857 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 27 20:26:05 crc kubenswrapper[4858]: E0127 20:26:05.948074 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-db6v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-dxfwr_openstack-operators(8a5eb91f-e957-4f9d-86c9-5f8905c6bee4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:26:05 crc kubenswrapper[4858]: E0127 20:26:05.951035 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dxfwr" podUID="8a5eb91f-e957-4f9d-86c9-5f8905c6bee4" Jan 27 20:26:06 crc kubenswrapper[4858]: I0127 20:26:06.508786 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw"] Jan 27 20:26:06 crc kubenswrapper[4858]: I0127 20:26:06.621032 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh"] Jan 27 20:26:06 crc kubenswrapper[4858]: I0127 20:26:06.633602 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6rrnl" event={"ID":"e86b137e-cd0c-4243-801f-dad4eb19373b","Type":"ContainerStarted","Data":"7acb4fcca55cefb9d3583c32d1e57404499ab7e44ed8b4975425213f9ba246c0"} Jan 27 20:26:06 crc kubenswrapper[4858]: I0127 20:26:06.633643 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6rrnl" Jan 27 20:26:06 crc kubenswrapper[4858]: E0127 20:26:06.637987 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dxfwr" podUID="8a5eb91f-e957-4f9d-86c9-5f8905c6bee4" Jan 27 20:26:06 crc kubenswrapper[4858]: I0127 20:26:06.668542 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6rrnl" podStartSLOduration=3.109433562 podStartE2EDuration="18.668239225s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.388074877 +0000 UTC m=+1095.095890583" lastFinishedPulling="2026-01-27 20:26:05.94688054 +0000 UTC m=+1110.654696246" observedRunningTime="2026-01-27 20:26:06.662828715 +0000 UTC m=+1111.370644431" watchObservedRunningTime="2026-01-27 20:26:06.668239225 +0000 UTC m=+1111.376054931" Jan 27 20:26:07 crc kubenswrapper[4858]: 
W0127 20:26:07.973669 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf75129ba_73c8_4f91_99b0_42d191fb0510.slice/crio-4841c8ca2a5ecec549693b32359a5246c659d9abd6fac711bf03aa72b50f13e7 WatchSource:0}: Error finding container 4841c8ca2a5ecec549693b32359a5246c659d9abd6fac711bf03aa72b50f13e7: Status 404 returned error can't find the container with id 4841c8ca2a5ecec549693b32359a5246c659d9abd6fac711bf03aa72b50f13e7
Jan 27 20:26:08 crc kubenswrapper[4858]: I0127 20:26:08.655656 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" event={"ID":"f75129ba-73c8-4f91-99b0-42d191fb0510","Type":"ContainerStarted","Data":"4841c8ca2a5ecec549693b32359a5246c659d9abd6fac711bf03aa72b50f13e7"}
Jan 27 20:26:08 crc kubenswrapper[4858]: I0127 20:26:08.657319 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6tsd5" event={"ID":"50605190-4834-4573-b8c9-70f5ca60b820","Type":"ContainerStarted","Data":"d19f0f1cab4142b382530ad18c27dde069f7f2ee22dfe15fcaa76582db77ed73"}
Jan 27 20:26:08 crc kubenswrapper[4858]: I0127 20:26:08.658450 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6tsd5"
Jan 27 20:26:08 crc kubenswrapper[4858]: I0127 20:26:08.659715 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" event={"ID":"c3bd5d36-c726-4b79-9c08-22bb23dabc28","Type":"ContainerStarted","Data":"b828bc45109cd27a32f8b2f4c5f6950a1257a343fbc85b76f658acea4ea39fed"}
Jan 27 20:26:08 crc kubenswrapper[4858]: I0127 20:26:08.678420 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6tsd5" podStartSLOduration=4.466043667 podStartE2EDuration="20.678398634s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:49.701474702 +0000 UTC m=+1094.409290408" lastFinishedPulling="2026-01-27 20:26:05.913829669 +0000 UTC m=+1110.621645375" observedRunningTime="2026-01-27 20:26:08.671921016 +0000 UTC m=+1113.379736732" watchObservedRunningTime="2026-01-27 20:26:08.678398634 +0000 UTC m=+1113.386214340"
Jan 27 20:26:09 crc kubenswrapper[4858]: I0127 20:26:09.678477 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hg2t5" event={"ID":"eba796fd-f7a8-4f83-9a75-7036f77d73f1","Type":"ContainerStarted","Data":"5d69a8a1823b1f3d135ba62edd7580fce0e7786c9034f18a00f4f193c8270977"}
Jan 27 20:26:09 crc kubenswrapper[4858]: I0127 20:26:09.679092 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hg2t5"
Jan 27 20:26:09 crc kubenswrapper[4858]: I0127 20:26:09.710649 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jrl5h" event={"ID":"968ee010-0e16-462d-82d3-7c5d61f107a1","Type":"ContainerStarted","Data":"a6fdd8082ec2421f692fd823cb47a15f6667e601f1ed9b61bb0b2448e9202262"}
Jan 27 20:26:09 crc kubenswrapper[4858]: I0127 20:26:09.711460 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jrl5h"
Jan 27 20:26:09 crc kubenswrapper[4858]: I0127 20:26:09.715912 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7w8hk" event={"ID":"c15c4bec-780c-42d1-8f36-618b255a95f6","Type":"ContainerStarted","Data":"71fa037d2a04f54c2a9318544e65d9a1f99ffe6903fd4ca3d0fb51cf68a594be"}
Jan 27 20:26:09 crc kubenswrapper[4858]: I0127 20:26:09.716400 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7w8hk"
Jan 27 20:26:09 crc kubenswrapper[4858]: I0127 20:26:09.726230 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pl99n" event={"ID":"f2bb693c-1d95-483e-b7c5-151516bd015e","Type":"ContainerStarted","Data":"f7d2e981e0ce904e9f54a44f993dc33c60f51ba06810946e551ed419fa564124"}
Jan 27 20:26:09 crc kubenswrapper[4858]: I0127 20:26:09.726383 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pl99n"
Jan 27 20:26:09 crc kubenswrapper[4858]: I0127 20:26:09.736400 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m6lz4" event={"ID":"feb30e7d-db27-4e87-ba07-f4730b228588","Type":"ContainerStarted","Data":"5e9d98ac72a261b5fe3bbf56605d01953eb87b5d267204bf05b1694c85753d88"}
Jan 27 20:26:09 crc kubenswrapper[4858]: I0127 20:26:09.736454 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m6lz4"
Jan 27 20:26:09 crc kubenswrapper[4858]: I0127 20:26:09.739511 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hg2t5" podStartSLOduration=6.173889329 podStartE2EDuration="21.739485724s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.400255838 +0000 UTC m=+1095.108071544" lastFinishedPulling="2026-01-27 20:26:05.965852233 +0000 UTC m=+1110.673667939" observedRunningTime="2026-01-27 20:26:09.736661006 +0000 UTC m=+1114.444476712" watchObservedRunningTime="2026-01-27 20:26:09.739485724 +0000 UTC m=+1114.447301430"
Jan 27 20:26:09 crc kubenswrapper[4858]: I0127 20:26:09.774406 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7w8hk" podStartSLOduration=6.618970393 podStartE2EDuration="21.774383266s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.817862345 +0000 UTC m=+1095.525678051" lastFinishedPulling="2026-01-27 20:26:05.973275218 +0000 UTC m=+1110.681090924" observedRunningTime="2026-01-27 20:26:09.77018032 +0000 UTC m=+1114.477996046" watchObservedRunningTime="2026-01-27 20:26:09.774383266 +0000 UTC m=+1114.482198982"
Jan 27 20:26:09 crc kubenswrapper[4858]: I0127 20:26:09.797322 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m6lz4" podStartSLOduration=6.188199071 podStartE2EDuration="21.797298057s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.35032045 +0000 UTC m=+1095.058136156" lastFinishedPulling="2026-01-27 20:26:05.959419436 +0000 UTC m=+1110.667235142" observedRunningTime="2026-01-27 20:26:09.795873028 +0000 UTC m=+1114.503688754" watchObservedRunningTime="2026-01-27 20:26:09.797298057 +0000 UTC m=+1114.505113773"
Jan 27 20:26:09 crc kubenswrapper[4858]: I0127 20:26:09.846019 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jrl5h" podStartSLOduration=6.687262173 podStartE2EDuration="21.845989249s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.813857663 +0000 UTC m=+1095.521673369" lastFinishedPulling="2026-01-27 20:26:05.972584739 +0000 UTC m=+1110.680400445" observedRunningTime="2026-01-27 20:26:09.8260594 +0000 UTC m=+1114.533875116" watchObservedRunningTime="2026-01-27 20:26:09.845989249 +0000 UTC m=+1114.553804955"
Jan 27 20:26:11 crc kubenswrapper[4858]: I0127 20:26:11.752935 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qlssw" event={"ID":"fd5e8600-d46a-4463-b592-f6d6025bf66f","Type":"ContainerStarted","Data":"a13635f311ca2975bffe185ed64fc4c1c6e5cd3696745d9f325e22c6255d6fd8"}
Jan 27 20:26:11 crc kubenswrapper[4858]: I0127 20:26:11.776842 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qlssw" podStartSLOduration=7.545920482 podStartE2EDuration="23.776805422s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:49.718678254 +0000 UTC m=+1094.426493960" lastFinishedPulling="2026-01-27 20:26:05.949563194 +0000 UTC m=+1110.657378900" observedRunningTime="2026-01-27 20:26:11.771470046 +0000 UTC m=+1116.479285752" watchObservedRunningTime="2026-01-27 20:26:11.776805422 +0000 UTC m=+1116.484621128"
Jan 27 20:26:11 crc kubenswrapper[4858]: I0127 20:26:11.777207 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pl99n" podStartSLOduration=7.98428712 podStartE2EDuration="23.777201613s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.157986648 +0000 UTC m=+1094.865802354" lastFinishedPulling="2026-01-27 20:26:05.950901141 +0000 UTC m=+1110.658716847" observedRunningTime="2026-01-27 20:26:09.860925221 +0000 UTC m=+1114.568740927" watchObservedRunningTime="2026-01-27 20:26:11.777201613 +0000 UTC m=+1116.485017319"
Jan 27 20:26:12 crc kubenswrapper[4858]: I0127 20:26:12.760002 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qlssw"
Jan 27 20:26:13 crc kubenswrapper[4858]: I0127 20:26:13.768510 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" event={"ID":"f75129ba-73c8-4f91-99b0-42d191fb0510","Type":"ContainerStarted","Data":"570550113f473a84c2d3139a7840592f84f83910f0d0ad527216ec911777209b"}
Jan 27 20:26:13 crc kubenswrapper[4858]: I0127 20:26:13.768598 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw"
Jan 27 20:26:13 crc kubenswrapper[4858]: I0127 20:26:13.807191 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw" podStartSLOduration=24.80717116 podStartE2EDuration="24.80717116s" podCreationTimestamp="2026-01-27 20:25:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:26:13.79918693 +0000 UTC m=+1118.507002646" watchObservedRunningTime="2026-01-27 20:26:13.80717116 +0000 UTC m=+1118.514986876"
Jan 27 20:26:14 crc kubenswrapper[4858]: I0127 20:26:14.783174 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cp6lt" event={"ID":"b778d97d-e9dc-4017-94ff-9cfd82322a3a","Type":"ContainerStarted","Data":"2e7d35eea51f1bfa46833df2610d779af893f2756811f2bfdf3e06e498e9cc4b"}
Jan 27 20:26:14 crc kubenswrapper[4858]: I0127 20:26:14.808646 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cp6lt" podStartSLOduration=11.678244243 podStartE2EDuration="26.808628385s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.819220613 +0000 UTC m=+1095.527036319" lastFinishedPulling="2026-01-27 20:26:05.949604765 +0000 UTC m=+1110.657420461" observedRunningTime="2026-01-27 20:26:14.806487586 +0000 UTC m=+1119.514303302" watchObservedRunningTime="2026-01-27 20:26:14.808628385 +0000 UTC m=+1119.516444091"
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.805095 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5975f685d8-snnk5" event={"ID":"112cff1f-1841-4fe8-96e2-95d2be2957a2","Type":"ContainerStarted","Data":"0475c1da46da0099b9f6affe02c1c340c685e0267c7d6266e6ba555bc055d8de"}
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.805758 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5975f685d8-snnk5"
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.808121 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc2j8" event={"ID":"9e4c347f-b102-40c1-8935-77fdef528d14","Type":"ContainerStarted","Data":"de99fcfe020fccc5fd35d81e4e210e6a4670e63a60e4230d33600b575258298b"}
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.809807 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cnhqv" event={"ID":"7de86ff1-90b3-470b-bab1-344555db1153","Type":"ContainerStarted","Data":"98e20f3e005997206ce8e13516b5305db8e70b31940d7859fbd821d2484ae633"}
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.810843 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cnhqv"
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.814382 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-f7gwl" event={"ID":"314a20ef-a97b-40a6-8a85-b118e64d9a3a","Type":"ContainerStarted","Data":"0d9bb0e1f1982f2f95da3d439c0d27b34ebd75f1efbd87c0bf03b530c50b5f70"}
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.814577 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-f7gwl"
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.815878 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-k69nl" event={"ID":"397758a8-62c2-41ba-8177-5309d797bb2f","Type":"ContainerStarted","Data":"a6c681b68afa09cd43499b33c3136c06c6b1f9d14e05c4e4aa06940556f43cca"}
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.816374 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-k69nl"
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.819412 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8st2f" event={"ID":"c4dfc413-8d91-4a08-aef6-47188c0971c4","Type":"ContainerStarted","Data":"4ee8d97a88c66cc2c8a7067bc84b7a1739e15ba46102a5f0b8d1e314f9b90a0e"}
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.820685 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8st2f"
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.823080 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" event={"ID":"c3bd5d36-c726-4b79-9c08-22bb23dabc28","Type":"ContainerStarted","Data":"1760cbcef00eb44c3f0347bb30bde0fe99bf70cd5ee3de949947d862013251a8"}
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.823713 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cp6lt"
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.823927 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh"
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.832486 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5975f685d8-snnk5" podStartSLOduration=3.291976384 podStartE2EDuration="26.832446566s" podCreationTimestamp="2026-01-27 20:25:49 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.877455063 +0000 UTC m=+1095.585270769" lastFinishedPulling="2026-01-27 20:26:14.417925245 +0000 UTC m=+1119.125740951" observedRunningTime="2026-01-27 20:26:15.822601335 +0000 UTC m=+1120.530417101" watchObservedRunningTime="2026-01-27 20:26:15.832446566 +0000 UTC m=+1120.540262272"
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.846028 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-k69nl" podStartSLOduration=12.274286028 podStartE2EDuration="27.845995359s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.408003724 +0000 UTC m=+1095.115819430" lastFinishedPulling="2026-01-27 20:26:05.979713055 +0000 UTC m=+1110.687528761" observedRunningTime="2026-01-27 20:26:15.841835945 +0000 UTC m=+1120.549651671" watchObservedRunningTime="2026-01-27 20:26:15.845995359 +0000 UTC m=+1120.553811065"
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.863405 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cnhqv" podStartSLOduration=7.471539601 podStartE2EDuration="27.863374739s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.859645144 +0000 UTC m=+1095.567460850" lastFinishedPulling="2026-01-27 20:26:11.251480282 +0000 UTC m=+1115.959295988" observedRunningTime="2026-01-27 20:26:15.85543504 +0000 UTC m=+1120.563250756" watchObservedRunningTime="2026-01-27 20:26:15.863374739 +0000 UTC m=+1120.571190445"
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.876883 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8st2f" podStartSLOduration=7.511990426 podStartE2EDuration="27.87686284s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.886512396 +0000 UTC m=+1095.594328102" lastFinishedPulling="2026-01-27 20:26:11.25138482 +0000 UTC m=+1115.959200516" observedRunningTime="2026-01-27 20:26:15.869790135 +0000 UTC m=+1120.577605871" watchObservedRunningTime="2026-01-27 20:26:15.87686284 +0000 UTC m=+1120.584678546"
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.898990 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tc2j8" podStartSLOduration=3.258211377 podStartE2EDuration="26.89896192s" podCreationTimestamp="2026-01-27 20:25:49 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.864729736 +0000 UTC m=+1095.572545442" lastFinishedPulling="2026-01-27 20:26:14.505480279 +0000 UTC m=+1119.213295985" observedRunningTime="2026-01-27 20:26:15.886533827 +0000 UTC m=+1120.594349543" watchObservedRunningTime="2026-01-27 20:26:15.89896192 +0000 UTC m=+1120.606777636"
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.948159 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-f7gwl" podStartSLOduration=4.423173857 podStartE2EDuration="27.948135125s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.870952671 +0000 UTC m=+1095.578768367" lastFinishedPulling="2026-01-27 20:26:14.395913809 +0000 UTC m=+1119.103729635" observedRunningTime="2026-01-27 20:26:15.946719956 +0000 UTC m=+1120.654535682" watchObservedRunningTime="2026-01-27 20:26:15.948135125 +0000 UTC m=+1120.655950831"
Jan 27 20:26:15 crc kubenswrapper[4858]: I0127 20:26:15.980082 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh" podStartSLOduration=21.489689417 podStartE2EDuration="27.980060105s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:26:07.965516794 +0000 UTC m=+1112.673332500" lastFinishedPulling="2026-01-27 20:26:14.455887482 +0000 UTC m=+1119.163703188" observedRunningTime="2026-01-27 20:26:15.970860521 +0000 UTC m=+1120.678676227" watchObservedRunningTime="2026-01-27 20:26:15.980060105 +0000 UTC m=+1120.687875811"
Jan 27 20:26:18 crc kubenswrapper[4858]: I0127 20:26:18.756944 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pl99n"
Jan 27 20:26:18 crc kubenswrapper[4858]: I0127 20:26:18.768626 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-6tsd5"
Jan 27 20:26:18 crc kubenswrapper[4858]: I0127 20:26:18.792758 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qlssw"
Jan 27 20:26:18 crc kubenswrapper[4858]: I0127 20:26:18.945370 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-m6lz4"
Jan 27 20:26:18 crc kubenswrapper[4858]: I0127 20:26:18.995089 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6rrnl"
Jan 27 20:26:19 crc kubenswrapper[4858]: I0127 20:26:19.040066 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-k69nl"
Jan 27 20:26:19 crc kubenswrapper[4858]: I0127 20:26:19.142366 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-hg2t5"
Jan 27 20:26:19 crc kubenswrapper[4858]: I0127 20:26:19.195448 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cp6lt"
Jan 27 20:26:19 crc kubenswrapper[4858]: I0127 20:26:19.213447 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-cnhqv"
Jan 27 20:26:19 crc kubenswrapper[4858]: I0127 20:26:19.284790 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-f7gwl"
Jan 27 20:26:19 crc kubenswrapper[4858]: I0127 20:26:19.359111 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-jrl5h"
Jan 27 20:26:19 crc kubenswrapper[4858]: I0127 20:26:19.456044 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-8st2f"
Jan 27 20:26:19 crc kubenswrapper[4858]: I0127 20:26:19.635735 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-7w8hk"
Jan 27 20:26:19 crc kubenswrapper[4858]: I0127 20:26:19.868851 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5975f685d8-snnk5"
Jan 27 20:26:20 crc kubenswrapper[4858]: I0127 20:26:20.729235 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert\") pod \"infra-operator-controller-manager-694cf4f878-tskvm\" (UID: \"f5527334-db65-4031-a24f-9aafcffb6708\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm"
Jan 27 20:26:20 crc kubenswrapper[4858]: I0127 20:26:20.740919 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5527334-db65-4031-a24f-9aafcffb6708-cert\") pod \"infra-operator-controller-manager-694cf4f878-tskvm\" (UID: \"f5527334-db65-4031-a24f-9aafcffb6708\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm"
Jan 27 20:26:20 crc kubenswrapper[4858]: I0127 20:26:20.776767 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm"
Jan 27 20:26:21 crc kubenswrapper[4858]: I0127 20:26:21.244087 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm"]
Jan 27 20:26:21 crc kubenswrapper[4858]: W0127 20:26:21.248182 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5527334_db65_4031_a24f_9aafcffb6708.slice/crio-2544e147a769d5f5ca21497f1e85d86cd24cd2026e3c254de36d3fb63023396b WatchSource:0}: Error finding container 2544e147a769d5f5ca21497f1e85d86cd24cd2026e3c254de36d3fb63023396b: Status 404 returned error can't find the container with id 2544e147a769d5f5ca21497f1e85d86cd24cd2026e3c254de36d3fb63023396b
Jan 27 20:26:21 crc kubenswrapper[4858]: I0127 20:26:21.872371 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm" event={"ID":"f5527334-db65-4031-a24f-9aafcffb6708","Type":"ContainerStarted","Data":"2544e147a769d5f5ca21497f1e85d86cd24cd2026e3c254de36d3fb63023396b"}
Jan 27 20:26:25 crc kubenswrapper[4858]: I0127 20:26:25.001459 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh"
Jan 27 20:26:25 crc kubenswrapper[4858]: I0127 20:26:25.487742 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-86d6949bb8-k78rw"
Jan 27 20:26:28 crc kubenswrapper[4858]: I0127 20:26:28.959807 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-m4pbf" event={"ID":"6074b126-8795-48bc-8984-fc25402032a2","Type":"ContainerStarted","Data":"cf2e3ab9fad0378932474547fbe73ae8088e5a4cdbc21cc4a5c286e81ca54063"}
Jan 27 20:26:28 crc kubenswrapper[4858]: I0127 20:26:28.961361 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-m4pbf"
Jan 27 20:26:28 crc kubenswrapper[4858]: I0127 20:26:28.963729 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-54b92" event={"ID":"446c00be-b860-4220-bcc1-457005d92650","Type":"ContainerStarted","Data":"79a606ca691ecb6ccb6d814b7aa23923d121da7dd6eac301102133642487706f"}
Jan 27 20:26:28 crc kubenswrapper[4858]: I0127 20:26:28.963950 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-54b92"
Jan 27 20:26:28 crc kubenswrapper[4858]: I0127 20:26:28.965851 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sgtcz" event={"ID":"304980dc-cb07-41fa-ba11-1262d5a2b43b","Type":"ContainerStarted","Data":"25b54a6caf42e68372edd97aceca25b5acfc3eba6498fe1e3ee967aa95eb7abd"}
Jan 27 20:26:28 crc kubenswrapper[4858]: I0127 20:26:28.966035 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sgtcz"
Jan 27 20:26:28 crc kubenswrapper[4858]: I0127 20:26:28.968173 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dxfwr" event={"ID":"8a5eb91f-e957-4f9d-86c9-5f8905c6bee4","Type":"ContainerStarted","Data":"ad694b2fc7ddbb774b38cbbeb1c64a0e9afe08da78c2c49cc01fac4e46a92ab9"}
Jan 27 20:26:28 crc kubenswrapper[4858]: I0127 20:26:28.968336 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dxfwr"
Jan 27 20:26:28 crc kubenswrapper[4858]: I0127 20:26:28.970301 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dxhnn" event={"ID":"74b2bb8d-cae5-4033-b999-73e3ed604cb9","Type":"ContainerStarted","Data":"73c4936ce153fe1de6d0b695042276bace807feb32372192efef71f4f3fc7ae4"}
Jan 27 20:26:28 crc kubenswrapper[4858]: I0127 20:26:28.970681 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dxhnn"
Jan 27 20:26:28 crc kubenswrapper[4858]: I0127 20:26:28.985578 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-m4pbf" podStartSLOduration=3.549624989 podStartE2EDuration="40.985534223s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.395825514 +0000 UTC m=+1095.103641220" lastFinishedPulling="2026-01-27 20:26:27.831734748 +0000 UTC m=+1132.539550454" observedRunningTime="2026-01-27 20:26:28.977505401 +0000 UTC m=+1133.685321127" watchObservedRunningTime="2026-01-27 20:26:28.985534223 +0000 UTC m=+1133.693349919"
Jan 27 20:26:29 crc kubenswrapper[4858]: I0127 20:26:29.010650 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dxfwr" podStartSLOduration=3.5630301749999997 podStartE2EDuration="41.010617844s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.388038786 +0000 UTC m=+1095.095854492" lastFinishedPulling="2026-01-27 20:26:27.835626455 +0000 UTC m=+1132.543442161" observedRunningTime="2026-01-27 20:26:29.003715684 +0000 UTC m=+1133.711531400" watchObservedRunningTime="2026-01-27 20:26:29.010617844 +0000 UTC m=+1133.718433550"
Jan 27 20:26:29 crc kubenswrapper[4858]: I0127 20:26:29.023110 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-54b92" podStartSLOduration=4.04903867 podStartE2EDuration="41.023078258s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.858310667 +0000 UTC m=+1095.566126373" lastFinishedPulling="2026-01-27 20:26:27.832350245 +0000 UTC m=+1132.540165961" observedRunningTime="2026-01-27 20:26:29.022182393 +0000 UTC m=+1133.729998119" watchObservedRunningTime="2026-01-27 20:26:29.023078258 +0000 UTC m=+1133.730893964"
Jan 27 20:26:29 crc kubenswrapper[4858]: I0127 20:26:29.049080 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dxhnn" podStartSLOduration=4.053736301 podStartE2EDuration="41.049046714s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.83733341 +0000 UTC m=+1095.545149126" lastFinishedPulling="2026-01-27 20:26:27.832643833 +0000 UTC m=+1132.540459539" observedRunningTime="2026-01-27 20:26:29.047005507 +0000 UTC m=+1133.754821223" watchObservedRunningTime="2026-01-27 20:26:29.049046714 +0000 UTC m=+1133.756862420"
Jan 27 20:26:29 crc kubenswrapper[4858]: I0127 20:26:29.065323 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sgtcz" podStartSLOduration=4.066379176 podStartE2EDuration="41.065293051s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:25:50.832929176 +0000 UTC m=+1095.540744882" lastFinishedPulling="2026-01-27 20:26:27.831843051 +0000 UTC m=+1132.539658757" observedRunningTime="2026-01-27 20:26:29.0623318 +0000 UTC m=+1133.770147516" watchObservedRunningTime="2026-01-27 20:26:29.065293051 +0000 UTC m=+1133.773108757"
Jan 27 20:26:29 crc kubenswrapper[4858]: I0127 20:26:29.329406 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:26:29 crc kubenswrapper[4858]: I0127 20:26:29.329489 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:26:29 crc kubenswrapper[4858]: I0127 20:26:29.981836 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm" event={"ID":"f5527334-db65-4031-a24f-9aafcffb6708","Type":"ContainerStarted","Data":"b1ef3253e7293d8728d41a237b127200bd1d6759b7419ae03612bb18e895a9ec"}
Jan 27 20:26:30 crc kubenswrapper[4858]: I0127 20:26:30.004519 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm" podStartSLOduration=33.567188736 podStartE2EDuration="42.00449442s" podCreationTimestamp="2026-01-27 20:25:48 +0000 UTC" firstStartedPulling="2026-01-27 20:26:21.250967549 +0000 UTC m=+1125.958783255" lastFinishedPulling="2026-01-27 20:26:29.688273233 +0000 UTC m=+1134.396088939" observedRunningTime="2026-01-27 20:26:29.999283556 +0000 UTC m=+1134.707099262" watchObservedRunningTime="2026-01-27 20:26:30.00449442 +0000 UTC m=+1134.712310126"
Jan 27 20:26:30 crc kubenswrapper[4858]: I0127 20:26:30.777456 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm"
Jan 27 20:26:38 crc kubenswrapper[4858]: I0127 20:26:38.894425 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-m4pbf"
Jan 27 20:26:39 crc kubenswrapper[4858]: I0127 20:26:39.107361 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-dxfwr"
Jan 27 20:26:39 crc kubenswrapper[4858]: I0127 20:26:39.187037 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-54b92"
Jan 27 20:26:39 crc kubenswrapper[4858]: I0127 20:26:39.284023 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dxhnn"
Jan 27 20:26:39 crc kubenswrapper[4858]: I0127 20:26:39.520477 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-sgtcz"
Jan 27 20:26:40 crc kubenswrapper[4858]: I0127 20:26:40.785024 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-tskvm"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.481875 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.482843 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.482912 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.483723 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"955bc619bd742d004863858dd5a8f86f78a2f164e013b906e4efa16975027e52"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.483782 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://955bc619bd742d004863858dd5a8f86f78a2f164e013b906e4efa16975027e52" gracePeriod=600
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.565278 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b55fdf8d9-s86qf"]
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.567222 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b55fdf8d9-s86qf"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.571125 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.571608 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.571796 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.571933 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-rhkqx"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.571956 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.583632 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b55fdf8d9-s86qf"]
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.584690 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99ac46f1-1c5d-465f-9947-651f842ce4a3-config\") pod \"dnsmasq-dns-7b55fdf8d9-s86qf\" (UID: \"99ac46f1-1c5d-465f-9947-651f842ce4a3\") " pod="openstack/dnsmasq-dns-7b55fdf8d9-s86qf"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.584835 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfg2j\" (UniqueName: \"kubernetes.io/projected/99ac46f1-1c5d-465f-9947-651f842ce4a3-kube-api-access-vfg2j\") pod \"dnsmasq-dns-7b55fdf8d9-s86qf\" (UID: \"99ac46f1-1c5d-465f-9947-651f842ce4a3\") " pod="openstack/dnsmasq-dns-7b55fdf8d9-s86qf"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.584899 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99ac46f1-1c5d-465f-9947-651f842ce4a3-dns-svc\") pod \"dnsmasq-dns-7b55fdf8d9-s86qf\" (UID: \"99ac46f1-1c5d-465f-9947-651f842ce4a3\") " pod="openstack/dnsmasq-dns-7b55fdf8d9-s86qf"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.691313 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99ac46f1-1c5d-465f-9947-651f842ce4a3-config\") pod \"dnsmasq-dns-7b55fdf8d9-s86qf\" (UID: \"99ac46f1-1c5d-465f-9947-651f842ce4a3\") " pod="openstack/dnsmasq-dns-7b55fdf8d9-s86qf"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.691429 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfg2j\" (UniqueName: \"kubernetes.io/projected/99ac46f1-1c5d-465f-9947-651f842ce4a3-kube-api-access-vfg2j\") pod \"dnsmasq-dns-7b55fdf8d9-s86qf\" (UID: \"99ac46f1-1c5d-465f-9947-651f842ce4a3\") " pod="openstack/dnsmasq-dns-7b55fdf8d9-s86qf"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.691483 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99ac46f1-1c5d-465f-9947-651f842ce4a3-dns-svc\") pod \"dnsmasq-dns-7b55fdf8d9-s86qf\" (UID: \"99ac46f1-1c5d-465f-9947-651f842ce4a3\") " pod="openstack/dnsmasq-dns-7b55fdf8d9-s86qf"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.692626 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99ac46f1-1c5d-465f-9947-651f842ce4a3-config\") pod \"dnsmasq-dns-7b55fdf8d9-s86qf\" (UID: \"99ac46f1-1c5d-465f-9947-651f842ce4a3\") " pod="openstack/dnsmasq-dns-7b55fdf8d9-s86qf"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.692721 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99ac46f1-1c5d-465f-9947-651f842ce4a3-dns-svc\") pod \"dnsmasq-dns-7b55fdf8d9-s86qf\" (UID: \"99ac46f1-1c5d-465f-9947-651f842ce4a3\") " pod="openstack/dnsmasq-dns-7b55fdf8d9-s86qf"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.718711 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfg2j\" (UniqueName: \"kubernetes.io/projected/99ac46f1-1c5d-465f-9947-651f842ce4a3-kube-api-access-vfg2j\") pod \"dnsmasq-dns-7b55fdf8d9-s86qf\" (UID: \"99ac46f1-1c5d-465f-9947-651f842ce4a3\") " pod="openstack/dnsmasq-dns-7b55fdf8d9-s86qf"
Jan 27 20:26:59 crc kubenswrapper[4858]: I0127 20:26:59.896721 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b55fdf8d9-s86qf"
Jan 27 20:27:00 crc kubenswrapper[4858]: I0127 20:27:00.261201 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="955bc619bd742d004863858dd5a8f86f78a2f164e013b906e4efa16975027e52" exitCode=0
Jan 27 20:27:00 crc kubenswrapper[4858]: I0127 20:27:00.261275 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"955bc619bd742d004863858dd5a8f86f78a2f164e013b906e4efa16975027e52"}
Jan 27 20:27:00 crc kubenswrapper[4858]: I0127 20:27:00.262152 4858 scope.go:117] "RemoveContainer" containerID="134de6cefdf9618660f3288534217e176eacedd779a7557a8425c203f6c864ec"
Jan 27 20:27:00 crc kubenswrapper[4858]: W0127 20:27:00.399747 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99ac46f1_1c5d_465f_9947_651f842ce4a3.slice/crio-c6d3ac221be3cbed54ae98ef049916257a82b4b17ff094c80971597e6811e936 WatchSource:0}: Error finding container c6d3ac221be3cbed54ae98ef049916257a82b4b17ff094c80971597e6811e936: Status 404 returned error can't find the container with id c6d3ac221be3cbed54ae98ef049916257a82b4b17ff094c80971597e6811e936
Jan 27 20:27:00 crc kubenswrapper[4858]: I0127 20:27:00.400872 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b55fdf8d9-s86qf"]
Jan 27 20:27:01 crc kubenswrapper[4858]: I0127 20:27:01.277692 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"96d22823cb85c08d62e23a9b28d554ba658642fe85f23bff7568ba66ed62f3ed"}
Jan 27 20:27:01 crc kubenswrapper[4858]: I0127 20:27:01.279492 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b55fdf8d9-s86qf" event={"ID":"99ac46f1-1c5d-465f-9947-651f842ce4a3","Type":"ContainerStarted","Data":"c6d3ac221be3cbed54ae98ef049916257a82b4b17ff094c80971597e6811e936"}
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.207460 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85bf9695fc-8jtxr"]
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.211696 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.239319 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85bf9695fc-8jtxr"]
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.265461 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/298abc4c-aae8-461b-9742-7349f71de55f-dns-svc\") pod \"dnsmasq-dns-85bf9695fc-8jtxr\" (UID: \"298abc4c-aae8-461b-9742-7349f71de55f\") " pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.265543 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxjxp\" (UniqueName: \"kubernetes.io/projected/298abc4c-aae8-461b-9742-7349f71de55f-kube-api-access-mxjxp\") pod \"dnsmasq-dns-85bf9695fc-8jtxr\" (UID: \"298abc4c-aae8-461b-9742-7349f71de55f\") " pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.265987 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/298abc4c-aae8-461b-9742-7349f71de55f-config\") pod \"dnsmasq-dns-85bf9695fc-8jtxr\" (UID: \"298abc4c-aae8-461b-9742-7349f71de55f\") " pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.367785 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/298abc4c-aae8-461b-9742-7349f71de55f-config\") pod \"dnsmasq-dns-85bf9695fc-8jtxr\" (UID: \"298abc4c-aae8-461b-9742-7349f71de55f\") " pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.368105 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/298abc4c-aae8-461b-9742-7349f71de55f-dns-svc\") pod \"dnsmasq-dns-85bf9695fc-8jtxr\" (UID: \"298abc4c-aae8-461b-9742-7349f71de55f\") " pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.368185 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxjxp\" (UniqueName: \"kubernetes.io/projected/298abc4c-aae8-461b-9742-7349f71de55f-kube-api-access-mxjxp\") pod \"dnsmasq-dns-85bf9695fc-8jtxr\" (UID: \"298abc4c-aae8-461b-9742-7349f71de55f\") " pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.369739 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/298abc4c-aae8-461b-9742-7349f71de55f-config\") pod \"dnsmasq-dns-85bf9695fc-8jtxr\" (UID: \"298abc4c-aae8-461b-9742-7349f71de55f\") " pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.370316 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/298abc4c-aae8-461b-9742-7349f71de55f-dns-svc\") pod \"dnsmasq-dns-85bf9695fc-8jtxr\" (UID: \"298abc4c-aae8-461b-9742-7349f71de55f\") " pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.416273 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxjxp\" (UniqueName: \"kubernetes.io/projected/298abc4c-aae8-461b-9742-7349f71de55f-kube-api-access-mxjxp\") pod \"dnsmasq-dns-85bf9695fc-8jtxr\" (UID: \"298abc4c-aae8-461b-9742-7349f71de55f\") " pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.537027 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.583927 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b55fdf8d9-s86qf"]
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.610605 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57bc5cbcd7-8ckk8"]
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.612199 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bc5cbcd7-8ckk8"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.626219 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57bc5cbcd7-8ckk8"]
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.673407 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c43b9dc-5f8d-414d-8e09-47c33607ce47-dns-svc\") pod \"dnsmasq-dns-57bc5cbcd7-8ckk8\" (UID: \"5c43b9dc-5f8d-414d-8e09-47c33607ce47\") " pod="openstack/dnsmasq-dns-57bc5cbcd7-8ckk8"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.673579 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzfr5\" (UniqueName: \"kubernetes.io/projected/5c43b9dc-5f8d-414d-8e09-47c33607ce47-kube-api-access-qzfr5\") pod \"dnsmasq-dns-57bc5cbcd7-8ckk8\" (UID: \"5c43b9dc-5f8d-414d-8e09-47c33607ce47\") " pod="openstack/dnsmasq-dns-57bc5cbcd7-8ckk8"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.673619 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c43b9dc-5f8d-414d-8e09-47c33607ce47-config\") pod \"dnsmasq-dns-57bc5cbcd7-8ckk8\" (UID: \"5c43b9dc-5f8d-414d-8e09-47c33607ce47\") " pod="openstack/dnsmasq-dns-57bc5cbcd7-8ckk8"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.778583 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzfr5\" (UniqueName: \"kubernetes.io/projected/5c43b9dc-5f8d-414d-8e09-47c33607ce47-kube-api-access-qzfr5\") pod \"dnsmasq-dns-57bc5cbcd7-8ckk8\" (UID: \"5c43b9dc-5f8d-414d-8e09-47c33607ce47\") " pod="openstack/dnsmasq-dns-57bc5cbcd7-8ckk8"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.778659 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c43b9dc-5f8d-414d-8e09-47c33607ce47-config\") pod \"dnsmasq-dns-57bc5cbcd7-8ckk8\" (UID: \"5c43b9dc-5f8d-414d-8e09-47c33607ce47\") " pod="openstack/dnsmasq-dns-57bc5cbcd7-8ckk8"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.778711 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c43b9dc-5f8d-414d-8e09-47c33607ce47-dns-svc\") pod \"dnsmasq-dns-57bc5cbcd7-8ckk8\" (UID: \"5c43b9dc-5f8d-414d-8e09-47c33607ce47\") " pod="openstack/dnsmasq-dns-57bc5cbcd7-8ckk8"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.779784 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c43b9dc-5f8d-414d-8e09-47c33607ce47-dns-svc\") pod \"dnsmasq-dns-57bc5cbcd7-8ckk8\" (UID: \"5c43b9dc-5f8d-414d-8e09-47c33607ce47\") " pod="openstack/dnsmasq-dns-57bc5cbcd7-8ckk8"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.779839 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c43b9dc-5f8d-414d-8e09-47c33607ce47-config\") pod \"dnsmasq-dns-57bc5cbcd7-8ckk8\" (UID: \"5c43b9dc-5f8d-414d-8e09-47c33607ce47\") " pod="openstack/dnsmasq-dns-57bc5cbcd7-8ckk8"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.819776 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzfr5\" (UniqueName: \"kubernetes.io/projected/5c43b9dc-5f8d-414d-8e09-47c33607ce47-kube-api-access-qzfr5\") pod \"dnsmasq-dns-57bc5cbcd7-8ckk8\" (UID: \"5c43b9dc-5f8d-414d-8e09-47c33607ce47\") " pod="openstack/dnsmasq-dns-57bc5cbcd7-8ckk8"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.926160 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bc5cbcd7-8ckk8"]
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.937115 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bc5cbcd7-8ckk8"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.966837 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-767476ffd5-svc5x"]
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.973703 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767476ffd5-svc5x"
Jan 27 20:27:03 crc kubenswrapper[4858]: I0127 20:27:03.981482 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-767476ffd5-svc5x"]
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.090116 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/985b6324-1458-4ed6-9aaa-019ce90601f9-config\") pod \"dnsmasq-dns-767476ffd5-svc5x\" (UID: \"985b6324-1458-4ed6-9aaa-019ce90601f9\") " pod="openstack/dnsmasq-dns-767476ffd5-svc5x"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.090164 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2fs8\" (UniqueName: \"kubernetes.io/projected/985b6324-1458-4ed6-9aaa-019ce90601f9-kube-api-access-l2fs8\") pod \"dnsmasq-dns-767476ffd5-svc5x\" (UID: \"985b6324-1458-4ed6-9aaa-019ce90601f9\") " pod="openstack/dnsmasq-dns-767476ffd5-svc5x"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.090187 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/985b6324-1458-4ed6-9aaa-019ce90601f9-dns-svc\") pod \"dnsmasq-dns-767476ffd5-svc5x\" (UID: \"985b6324-1458-4ed6-9aaa-019ce90601f9\") " pod="openstack/dnsmasq-dns-767476ffd5-svc5x"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.192349 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/985b6324-1458-4ed6-9aaa-019ce90601f9-dns-svc\") pod \"dnsmasq-dns-767476ffd5-svc5x\" (UID: \"985b6324-1458-4ed6-9aaa-019ce90601f9\") " pod="openstack/dnsmasq-dns-767476ffd5-svc5x"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.192609 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/985b6324-1458-4ed6-9aaa-019ce90601f9-config\") pod \"dnsmasq-dns-767476ffd5-svc5x\" (UID: \"985b6324-1458-4ed6-9aaa-019ce90601f9\") " pod="openstack/dnsmasq-dns-767476ffd5-svc5x"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.192637 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2fs8\" (UniqueName: \"kubernetes.io/projected/985b6324-1458-4ed6-9aaa-019ce90601f9-kube-api-access-l2fs8\") pod \"dnsmasq-dns-767476ffd5-svc5x\" (UID: \"985b6324-1458-4ed6-9aaa-019ce90601f9\") " pod="openstack/dnsmasq-dns-767476ffd5-svc5x"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.193695 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/985b6324-1458-4ed6-9aaa-019ce90601f9-dns-svc\") pod \"dnsmasq-dns-767476ffd5-svc5x\" (UID: \"985b6324-1458-4ed6-9aaa-019ce90601f9\") " pod="openstack/dnsmasq-dns-767476ffd5-svc5x"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.194913 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/985b6324-1458-4ed6-9aaa-019ce90601f9-config\") pod \"dnsmasq-dns-767476ffd5-svc5x\" (UID: \"985b6324-1458-4ed6-9aaa-019ce90601f9\") " pod="openstack/dnsmasq-dns-767476ffd5-svc5x"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.214995 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2fs8\" (UniqueName: \"kubernetes.io/projected/985b6324-1458-4ed6-9aaa-019ce90601f9-kube-api-access-l2fs8\") pod \"dnsmasq-dns-767476ffd5-svc5x\" (UID: \"985b6324-1458-4ed6-9aaa-019ce90601f9\") " pod="openstack/dnsmasq-dns-767476ffd5-svc5x"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.304141 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767476ffd5-svc5x"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.416067 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.417886 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.424302 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.424354 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.424378 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.424542 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.424749 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.424773 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.429370 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-m692p"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.440823 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.613494 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.613604 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.613649 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.613683 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9h4j\" (UniqueName: \"kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-kube-api-access-b9h4j\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.613714 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.613747 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.613792 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.613833 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.613855 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.613889 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-config-data\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.613916 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.714944 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.715006 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.715034 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9h4j\" (UniqueName: \"kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-kube-api-access-b9h4j\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.715062 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.715087 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.715121 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.715151 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.715171 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.715194 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-config-data\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.715212 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.715250 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.715705 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.716223 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.717878 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.718000 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-config-data\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.718178 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.718383 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.721150 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.726654 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.733005 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.733860 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.738498 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9h4j\" (UniqueName: \"kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-kube-api-access-b9h4j\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0"
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.742333 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.745272 4858 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.748887 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.752216 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.752483 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.752759 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.753104 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8hkvj" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.753302 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.753427 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.753609 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.757324 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " pod="openstack/rabbitmq-server-0" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.918939 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.919006 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.919034 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.919059 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.919095 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/ad881410-229a-4427-862b-8febd0e5ab61-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.919201 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffbht\" (UniqueName: \"kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-kube-api-access-ffbht\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.919249 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.919281 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.919311 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.919336 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:04 crc kubenswrapper[4858]: I0127 20:27:04.919357 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad881410-229a-4427-862b-8febd0e5ab61-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.020526 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.020638 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.020671 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.020700 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.020733 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad881410-229a-4427-862b-8febd0e5ab61-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.020813 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffbht\" (UniqueName: \"kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-kube-api-access-ffbht\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.020846 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.020879 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.020917 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.020943 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.020968 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad881410-229a-4427-862b-8febd0e5ab61-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.021074 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") device 
mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.021786 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.021930 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.022392 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.022875 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.023363 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.030831 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.035492 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad881410-229a-4427-862b-8febd0e5ab61-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.036618 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad881410-229a-4427-862b-8febd0e5ab61-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.036689 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.048446 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ffbht\" (UniqueName: \"kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-kube-api-access-ffbht\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.050010 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.054122 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.078912 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.081134 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.083429 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.083634 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.083954 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.084212 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-gdrlh" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.084939 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.085116 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.085723 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.093214 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.108653 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.226217 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c539609-6c9e-46bc-a0d7-6a629e83ce17-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.228414 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c539609-6c9e-46bc-a0d7-6a629e83ce17-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.228508 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c539609-6c9e-46bc-a0d7-6a629e83ce17-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.228613 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c539609-6c9e-46bc-a0d7-6a629e83ce17-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.228661 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh8mw\" (UniqueName: \"kubernetes.io/projected/6c539609-6c9e-46bc-a0d7-6a629e83ce17-kube-api-access-qh8mw\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.228775 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c539609-6c9e-46bc-a0d7-6a629e83ce17-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.228816 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c539609-6c9e-46bc-a0d7-6a629e83ce17-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.228852 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.228908 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/6c539609-6c9e-46bc-a0d7-6a629e83ce17-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.228960 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c539609-6c9e-46bc-a0d7-6a629e83ce17-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.229165 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c539609-6c9e-46bc-a0d7-6a629e83ce17-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.331269 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c539609-6c9e-46bc-a0d7-6a629e83ce17-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.331339 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c539609-6c9e-46bc-a0d7-6a629e83ce17-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.331366 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.331396 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c539609-6c9e-46bc-a0d7-6a629e83ce17-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.331424 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c539609-6c9e-46bc-a0d7-6a629e83ce17-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.331499 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c539609-6c9e-46bc-a0d7-6a629e83ce17-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.331525 4858 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c539609-6c9e-46bc-a0d7-6a629e83ce17-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.331565 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c539609-6c9e-46bc-a0d7-6a629e83ce17-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.331581 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c539609-6c9e-46bc-a0d7-6a629e83ce17-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.331608 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c539609-6c9e-46bc-a0d7-6a629e83ce17-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.331626 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh8mw\" (UniqueName: \"kubernetes.io/projected/6c539609-6c9e-46bc-a0d7-6a629e83ce17-kube-api-access-qh8mw\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.332668 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.332702 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c539609-6c9e-46bc-a0d7-6a629e83ce17-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.333268 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c539609-6c9e-46bc-a0d7-6a629e83ce17-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.333523 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c539609-6c9e-46bc-a0d7-6a629e83ce17-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.333542 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c539609-6c9e-46bc-a0d7-6a629e83ce17-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.337519 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c539609-6c9e-46bc-a0d7-6a629e83ce17-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.341228 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c539609-6c9e-46bc-a0d7-6a629e83ce17-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.350402 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c539609-6c9e-46bc-a0d7-6a629e83ce17-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.350440 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c539609-6c9e-46bc-a0d7-6a629e83ce17-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.352929 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh8mw\" (UniqueName: \"kubernetes.io/projected/6c539609-6c9e-46bc-a0d7-6a629e83ce17-kube-api-access-qh8mw\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.362184 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c539609-6c9e-46bc-a0d7-6a629e83ce17-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.363053 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c539609-6c9e-46bc-a0d7-6a629e83ce17\") " pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:05 crc kubenswrapper[4858]: I0127 20:27:05.433635 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.306468 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.308861 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.313921 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.314044 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5brch" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.314264 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.317174 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.318434 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.320679 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.450710 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8985\" (UniqueName: \"kubernetes.io/projected/4768f41e-8ff0-4cec-b741-75f8902eb0e8-kube-api-access-k8985\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.450793 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4768f41e-8ff0-4cec-b741-75f8902eb0e8-config-data-default\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.451022 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.451078 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4768f41e-8ff0-4cec-b741-75f8902eb0e8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.451144 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4768f41e-8ff0-4cec-b741-75f8902eb0e8-kolla-config\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.451195 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4768f41e-8ff0-4cec-b741-75f8902eb0e8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.451231 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4768f41e-8ff0-4cec-b741-75f8902eb0e8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.451315 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4768f41e-8ff0-4cec-b741-75f8902eb0e8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.553388 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.553439 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4768f41e-8ff0-4cec-b741-75f8902eb0e8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.553469 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4768f41e-8ff0-4cec-b741-75f8902eb0e8-kolla-config\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.553494 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4768f41e-8ff0-4cec-b741-75f8902eb0e8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.553517 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4768f41e-8ff0-4cec-b741-75f8902eb0e8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.553580 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4768f41e-8ff0-4cec-b741-75f8902eb0e8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.553620 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8985\" (UniqueName: \"kubernetes.io/projected/4768f41e-8ff0-4cec-b741-75f8902eb0e8-kube-api-access-k8985\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.553646 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4768f41e-8ff0-4cec-b741-75f8902eb0e8-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.554762 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4768f41e-8ff0-4cec-b741-75f8902eb0e8-config-data-default\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.555034 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.557204 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4768f41e-8ff0-4cec-b741-75f8902eb0e8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.558788 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4768f41e-8ff0-4cec-b741-75f8902eb0e8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.559721 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4768f41e-8ff0-4cec-b741-75f8902eb0e8-kolla-config\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.563828 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4768f41e-8ff0-4cec-b741-75f8902eb0e8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.564337 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4768f41e-8ff0-4cec-b741-75f8902eb0e8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.578879 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.595519 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8985\" (UniqueName: \"kubernetes.io/projected/4768f41e-8ff0-4cec-b741-75f8902eb0e8-kube-api-access-k8985\") pod \"openstack-galera-0\" (UID: \"4768f41e-8ff0-4cec-b741-75f8902eb0e8\") " pod="openstack/openstack-galera-0" Jan 27 20:27:06 crc kubenswrapper[4858]: I0127 20:27:06.658306 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.713593 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.719437 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.722390 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.730115 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-j6292" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.730224 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.736926 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.748219 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.793533 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f7f223cd-763c-408e-a3cf-067af57416af-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.793853 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7f223cd-763c-408e-a3cf-067af57416af-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.793933 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7f223cd-763c-408e-a3cf-067af57416af-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.794006 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.794078 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f223cd-763c-408e-a3cf-067af57416af-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.794161 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flxqc\" (UniqueName: \"kubernetes.io/projected/f7f223cd-763c-408e-a3cf-067af57416af-kube-api-access-flxqc\") pod 
\"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.794255 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f7f223cd-763c-408e-a3cf-067af57416af-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.794416 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f7f223cd-763c-408e-a3cf-067af57416af-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.896363 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f7f223cd-763c-408e-a3cf-067af57416af-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.896427 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f7f223cd-763c-408e-a3cf-067af57416af-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.896453 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7f223cd-763c-408e-a3cf-067af57416af-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.896477 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7f223cd-763c-408e-a3cf-067af57416af-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.896496 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.896511 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f223cd-763c-408e-a3cf-067af57416af-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.896532 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flxqc\" (UniqueName: \"kubernetes.io/projected/f7f223cd-763c-408e-a3cf-067af57416af-kube-api-access-flxqc\") pod \"openstack-cell1-galera-0\" (UID: 
\"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.896582 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f7f223cd-763c-408e-a3cf-067af57416af-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.897130 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f7f223cd-763c-408e-a3cf-067af57416af-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.901967 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f7f223cd-763c-408e-a3cf-067af57416af-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.902209 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.905269 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f7f223cd-763c-408e-a3cf-067af57416af-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.913563 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7f223cd-763c-408e-a3cf-067af57416af-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.929057 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7f223cd-763c-408e-a3cf-067af57416af-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.930760 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f223cd-763c-408e-a3cf-067af57416af-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.962414 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.963820 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.984033 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.990690 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.994843 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 27 20:27:07 crc kubenswrapper[4858]: I0127 20:27:07.995668 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-rhgq8" Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.006119 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.016177 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flxqc\" (UniqueName: \"kubernetes.io/projected/f7f223cd-763c-408e-a3cf-067af57416af-kube-api-access-flxqc\") pod \"openstack-cell1-galera-0\" (UID: \"f7f223cd-763c-408e-a3cf-067af57416af\") " pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.114806 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.117832 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd2f70df-955b-44ba-a1be-a2f9d06a862c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bd2f70df-955b-44ba-a1be-a2f9d06a862c\") " pod="openstack/memcached-0" Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.117876 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd2f70df-955b-44ba-a1be-a2f9d06a862c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bd2f70df-955b-44ba-a1be-a2f9d06a862c\") " pod="openstack/memcached-0" Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.119237 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd2f70df-955b-44ba-a1be-a2f9d06a862c-config-data\") pod \"memcached-0\" (UID: \"bd2f70df-955b-44ba-a1be-a2f9d06a862c\") " pod="openstack/memcached-0" Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.119276 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bd2f70df-955b-44ba-a1be-a2f9d06a862c-kolla-config\") pod \"memcached-0\" (UID: \"bd2f70df-955b-44ba-a1be-a2f9d06a862c\") " pod="openstack/memcached-0" Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.119316 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf6sc\" (UniqueName: \"kubernetes.io/projected/bd2f70df-955b-44ba-a1be-a2f9d06a862c-kube-api-access-tf6sc\") pod \"memcached-0\" (UID: \"bd2f70df-955b-44ba-a1be-a2f9d06a862c\") " pod="openstack/memcached-0" Jan 27 20:27:08 crc kubenswrapper[4858]: 
I0127 20:27:08.220514 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd2f70df-955b-44ba-a1be-a2f9d06a862c-config-data\") pod \"memcached-0\" (UID: \"bd2f70df-955b-44ba-a1be-a2f9d06a862c\") " pod="openstack/memcached-0" Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.220597 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bd2f70df-955b-44ba-a1be-a2f9d06a862c-kolla-config\") pod \"memcached-0\" (UID: \"bd2f70df-955b-44ba-a1be-a2f9d06a862c\") " pod="openstack/memcached-0" Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.220648 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf6sc\" (UniqueName: \"kubernetes.io/projected/bd2f70df-955b-44ba-a1be-a2f9d06a862c-kube-api-access-tf6sc\") pod \"memcached-0\" (UID: \"bd2f70df-955b-44ba-a1be-a2f9d06a862c\") " pod="openstack/memcached-0" Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.220696 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd2f70df-955b-44ba-a1be-a2f9d06a862c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bd2f70df-955b-44ba-a1be-a2f9d06a862c\") " pod="openstack/memcached-0" Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.220729 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd2f70df-955b-44ba-a1be-a2f9d06a862c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bd2f70df-955b-44ba-a1be-a2f9d06a862c\") " pod="openstack/memcached-0" Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.222025 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bd2f70df-955b-44ba-a1be-a2f9d06a862c-kolla-config\") pod \"memcached-0\" (UID: \"bd2f70df-955b-44ba-a1be-a2f9d06a862c\") " pod="openstack/memcached-0" Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.222634 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd2f70df-955b-44ba-a1be-a2f9d06a862c-config-data\") pod \"memcached-0\" (UID: \"bd2f70df-955b-44ba-a1be-a2f9d06a862c\") " pod="openstack/memcached-0" Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.231421 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd2f70df-955b-44ba-a1be-a2f9d06a862c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bd2f70df-955b-44ba-a1be-a2f9d06a862c\") " pod="openstack/memcached-0" Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.238234 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd2f70df-955b-44ba-a1be-a2f9d06a862c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bd2f70df-955b-44ba-a1be-a2f9d06a862c\") " pod="openstack/memcached-0" Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.248082 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf6sc\" (UniqueName: \"kubernetes.io/projected/bd2f70df-955b-44ba-a1be-a2f9d06a862c-kube-api-access-tf6sc\") pod \"memcached-0\" (UID: \"bd2f70df-955b-44ba-a1be-a2f9d06a862c\") " pod="openstack/memcached-0" Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 
Jan 27 20:27:08 crc kubenswrapper[4858]: I0127 20:27:08.358066 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 27 20:27:09 crc kubenswrapper[4858]: I0127 20:27:09.796838 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 27 20:27:09 crc kubenswrapper[4858]: I0127 20:27:09.798582 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 27 20:27:09 crc kubenswrapper[4858]: I0127 20:27:09.800980 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-8ffhr"
Jan 27 20:27:09 crc kubenswrapper[4858]: I0127 20:27:09.822362 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 27 20:27:09 crc kubenswrapper[4858]: I0127 20:27:09.970071 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6mtr\" (UniqueName: \"kubernetes.io/projected/d2c5e060-865d-405e-937d-1450a1928f49-kube-api-access-v6mtr\") pod \"kube-state-metrics-0\" (UID: \"d2c5e060-865d-405e-937d-1450a1928f49\") " pod="openstack/kube-state-metrics-0"
Jan 27 20:27:10 crc kubenswrapper[4858]: I0127 20:27:10.071424 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6mtr\" (UniqueName: \"kubernetes.io/projected/d2c5e060-865d-405e-937d-1450a1928f49-kube-api-access-v6mtr\") pod \"kube-state-metrics-0\" (UID: \"d2c5e060-865d-405e-937d-1450a1928f49\") " pod="openstack/kube-state-metrics-0"
Jan 27 20:27:10 crc kubenswrapper[4858]: I0127 20:27:10.100205 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6mtr\" (UniqueName: \"kubernetes.io/projected/d2c5e060-865d-405e-937d-1450a1928f49-kube-api-access-v6mtr\") pod \"kube-state-metrics-0\" (UID: \"d2c5e060-865d-405e-937d-1450a1928f49\") " pod="openstack/kube-state-metrics-0"
Jan 27 20:27:10 crc kubenswrapper[4858]: I0127 20:27:10.126125 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.160270 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.163384 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.166751 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.167101 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-pdrbd"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.167313 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.169410 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.169726 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.170089 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.170274 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.188167 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.205964 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.309718 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aa9e0f34-290c-4297-b65b-2046ea8bd21d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.309779 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-805dfc34-a393-4134-854b-f25365c0a015\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.309924 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.309982 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aa9e0f34-290c-4297-b65b-2046ea8bd21d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.310129 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.310192 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.310246 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77qw6\" (UniqueName: \"kubernetes.io/projected/aa9e0f34-290c-4297-b65b-2046ea8bd21d-kube-api-access-77qw6\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.310328 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-config\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.310363 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.310389 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.411897 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-config\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.411964 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.412003 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.412091 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aa9e0f34-290c-4297-b65b-2046ea8bd21d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.412126 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-805dfc34-a393-4134-854b-f25365c0a015\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.412159 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.412187 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aa9e0f34-290c-4297-b65b-2046ea8bd21d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.412304 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.412339 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.412369 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77qw6\" (UniqueName: \"kubernetes.io/projected/aa9e0f34-290c-4297-b65b-2046ea8bd21d-kube-api-access-77qw6\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.412947 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0"
\"kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.414012 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.421307 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.421729 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-config\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.421817 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.421882 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-805dfc34-a393-4134-854b-f25365c0a015\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/803e592a1a81ac6dfdcc5cad0d3e656e83a32ab2cebbc52f70d41fc2b9c7180d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.422004 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aa9e0f34-290c-4297-b65b-2046ea8bd21d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.423540 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aa9e0f34-290c-4297-b65b-2046ea8bd21d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.426770 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.431363 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77qw6\" (UniqueName: 
\"kubernetes.io/projected/aa9e0f34-290c-4297-b65b-2046ea8bd21d-kube-api-access-77qw6\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.476049 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-805dfc34-a393-4134-854b-f25365c0a015\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") pod \"prometheus-metric-storage-0\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:27:11 crc kubenswrapper[4858]: I0127 20:27:11.511207 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.713924 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jc5cc"] Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.715500 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.717956 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-v9hst" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.718799 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.718863 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.745725 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-vhbc7"] Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.748138 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.755213 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jc5cc"] Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.766123 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-vhbc7"] Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.861056 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtlss\" (UniqueName: \"kubernetes.io/projected/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-kube-api-access-qtlss\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.861126 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-var-log\") pod \"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.861153 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-var-run\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.861214 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-var-run-ovn\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.861251 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-etc-ovs\") pod \"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.861289 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-var-run\") pod \"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.861356 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cffbd\" (UniqueName: \"kubernetes.io/projected/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-kube-api-access-cffbd\") pod \"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.861381 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-scripts\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 
20:27:13.861416 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-combined-ca-bundle\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.861445 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-ovn-controller-tls-certs\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.861474 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-var-log-ovn\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.861498 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-scripts\") pod \"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.861519 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-var-lib\") pod \"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.963436 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtlss\" (UniqueName: \"kubernetes.io/projected/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-kube-api-access-qtlss\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.963492 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-var-log\") pod \"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.963517 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-var-run\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.963564 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-var-run-ovn\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.963611 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" 
(UniqueName: \"kubernetes.io/host-path/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-etc-ovs\") pod \"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.963637 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-var-run\") pod \"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.963685 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cffbd\" (UniqueName: \"kubernetes.io/projected/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-kube-api-access-cffbd\") pod \"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.963704 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-scripts\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.963733 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-combined-ca-bundle\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.963755 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-ovn-controller-tls-certs\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.963800 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-var-log-ovn\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.963821 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-scripts\") pod \"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.963837 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-var-lib\") pod \"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.964314 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-var-log\") pod \"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " 
pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.964439 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-var-lib\") pod \"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.964454 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-etc-ovs\") pod \"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.964548 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-var-run\") pod \"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.964561 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-var-run\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.964879 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-var-run-ovn\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.964996 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-var-log-ovn\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.967747 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-scripts\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.968126 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-scripts\") pod \"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.981072 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-ovn-controller-tls-certs\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.984600 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cffbd\" (UniqueName: \"kubernetes.io/projected/b28f0be1-aa4f-445d-95c3-1abd84b9c82a-kube-api-access-cffbd\") pod 
\"ovn-controller-ovs-vhbc7\" (UID: \"b28f0be1-aa4f-445d-95c3-1abd84b9c82a\") " pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.986128 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-combined-ca-bundle\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:13 crc kubenswrapper[4858]: I0127 20:27:13.988873 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtlss\" (UniqueName: \"kubernetes.io/projected/d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa-kube-api-access-qtlss\") pod \"ovn-controller-jc5cc\" (UID: \"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa\") " pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.035291 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.065497 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.581401 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.582997 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.586088 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-4gk8c" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.586187 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.586890 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.587190 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.590230 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.609655 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.679262 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b049e044-9171-4011-9c90-c334fa955321-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.679584 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b049e044-9171-4011-9c90-c334fa955321-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.679708 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b049e044-9171-4011-9c90-c334fa955321-config\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.680016 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.680275 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkxgw\" (UniqueName: \"kubernetes.io/projected/b049e044-9171-4011-9c90-c334fa955321-kube-api-access-vkxgw\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.680398 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b049e044-9171-4011-9c90-c334fa955321-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.680492 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b049e044-9171-4011-9c90-c334fa955321-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.680615 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b049e044-9171-4011-9c90-c334fa955321-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.782392 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b049e044-9171-4011-9c90-c334fa955321-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.782505 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b049e044-9171-4011-9c90-c334fa955321-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.782554 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b049e044-9171-4011-9c90-c334fa955321-config\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.782632 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0" Jan 27 20:27:14 crc 
Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.782682 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkxgw\" (UniqueName: \"kubernetes.io/projected/b049e044-9171-4011-9c90-c334fa955321-kube-api-access-vkxgw\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.782712 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b049e044-9171-4011-9c90-c334fa955321-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.782734 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b049e044-9171-4011-9c90-c334fa955321-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.782764 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b049e044-9171-4011-9c90-c334fa955321-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.783425 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b049e044-9171-4011-9c90-c334fa955321-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.784511 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0"
Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.784804 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b049e044-9171-4011-9c90-c334fa955321-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.785015 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b049e044-9171-4011-9c90-c334fa955321-config\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.788118 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b049e044-9171-4011-9c90-c334fa955321-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.788817 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b049e044-9171-4011-9c90-c334fa955321-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.806910 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b049e044-9171-4011-9c90-c334fa955321-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.813896 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.816417 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkxgw\" (UniqueName: \"kubernetes.io/projected/b049e044-9171-4011-9c90-c334fa955321-kube-api-access-vkxgw\") pod \"ovsdbserver-nb-0\" (UID: \"b049e044-9171-4011-9c90-c334fa955321\") " pod="openstack/ovsdbserver-nb-0"
Jan 27 20:27:14 crc kubenswrapper[4858]: I0127 20:27:14.921854 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.239980 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.241920 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.244455 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.254648 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.259941 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-j4zzx"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.260036 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.273175 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.339093 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13a7d533-55e2-4072-add8-4cd41613da8a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.339184 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.339235 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a7d533-55e2-4072-add8-4cd41613da8a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.339262 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5cwd\" (UniqueName: \"kubernetes.io/projected/13a7d533-55e2-4072-add8-4cd41613da8a-kube-api-access-t5cwd\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.339303 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/13a7d533-55e2-4072-add8-4cd41613da8a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.339381 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a7d533-55e2-4072-add8-4cd41613da8a-config\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.339453 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a7d533-55e2-4072-add8-4cd41613da8a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.339486 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a7d533-55e2-4072-add8-4cd41613da8a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.442077 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a7d533-55e2-4072-add8-4cd41613da8a-config\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.442177 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a7d533-55e2-4072-add8-4cd41613da8a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.442208 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a7d533-55e2-4072-add8-4cd41613da8a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.442289 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13a7d533-55e2-4072-add8-4cd41613da8a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.442331 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.442361 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a7d533-55e2-4072-add8-4cd41613da8a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.442386 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5cwd\" (UniqueName: \"kubernetes.io/projected/13a7d533-55e2-4072-add8-4cd41613da8a-kube-api-access-t5cwd\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.442412 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/13a7d533-55e2-4072-add8-4cd41613da8a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.442919 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.443062 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/13a7d533-55e2-4072-add8-4cd41613da8a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.443942 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a7d533-55e2-4072-add8-4cd41613da8a-config\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.451542 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a7d533-55e2-4072-add8-4cd41613da8a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.454127 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a7d533-55e2-4072-add8-4cd41613da8a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.454924 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/13a7d533-55e2-4072-add8-4cd41613da8a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0"
Jan
27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.458170 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13a7d533-55e2-4072-add8-4cd41613da8a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0" Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.490031 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5cwd\" (UniqueName: \"kubernetes.io/projected/13a7d533-55e2-4072-add8-4cd41613da8a-kube-api-access-t5cwd\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0" Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.502972 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"13a7d533-55e2-4072-add8-4cd41613da8a\") " pod="openstack/ovsdbserver-sb-0" Jan 27 20:27:17 crc kubenswrapper[4858]: I0127 20:27:17.566082 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 20:27:19 crc kubenswrapper[4858]: E0127 20:27:19.141262 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 27 20:27:19 crc kubenswrapper[4858]: E0127 20:27:19.141825 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 27 20:27:19 crc kubenswrapper[4858]: E0127 20:27:19.141975 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.129.56.46:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfg2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7b55fdf8d9-s86qf_openstack(99ac46f1-1c5d-465f-9947-651f842ce4a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:27:19 crc kubenswrapper[4858]: E0127 20:27:19.143170 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7b55fdf8d9-s86qf" podUID="99ac46f1-1c5d-465f-9947-651f842ce4a3" Jan 27 20:27:19 crc kubenswrapper[4858]: I0127 20:27:19.736154 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85bf9695fc-8jtxr"] Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.030396 4858 util.go:48] "No ready sandbox for pod can be found. 
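
Every pull failure in this capture carries the same status: code = Canceled, desc = "copying config: context canceled". That is not a registry-side refusal — the CRI PullImage call was torn down while copying the image config blob, meaning the context the kubelet passed to the runtime was canceled mid-transfer (a pull that outlived its deadline or a canceled pod sync are typical causes; the log alone does not say which). A minimal Go reproduction of how a canceled context surfaces in exactly that shape, with pullImage standing in for the CRI call:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // pullImage stands in for the long-running CRI PullImage RPC; like a gRPC
    // client, it gives up and reports the context error once the caller cancels.
    func pullImage(ctx context.Context, image string) error {
        select {
        case <-time.After(10 * time.Second): // pretend copying the config takes this long
            return nil
        case <-ctx.Done():
            return fmt.Errorf("copying config: %w", ctx.Err())
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        go func() { time.Sleep(100 * time.Millisecond); cancel() }() // the caller giving up
        err := pullImage(ctx, "38.129.56.46:5001/podified-master-centos10/openstack-neutron-server:watcher_latest")
        fmt.Println(err, "| canceled:", errors.Is(err, context.Canceled))
    }
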
Need to start a new one" pod="openstack/dnsmasq-dns-7b55fdf8d9-s86qf" Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.102579 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99ac46f1-1c5d-465f-9947-651f842ce4a3-config\") pod \"99ac46f1-1c5d-465f-9947-651f842ce4a3\" (UID: \"99ac46f1-1c5d-465f-9947-651f842ce4a3\") " Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.103032 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfg2j\" (UniqueName: \"kubernetes.io/projected/99ac46f1-1c5d-465f-9947-651f842ce4a3-kube-api-access-vfg2j\") pod \"99ac46f1-1c5d-465f-9947-651f842ce4a3\" (UID: \"99ac46f1-1c5d-465f-9947-651f842ce4a3\") " Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.103129 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99ac46f1-1c5d-465f-9947-651f842ce4a3-dns-svc\") pod \"99ac46f1-1c5d-465f-9947-651f842ce4a3\" (UID: \"99ac46f1-1c5d-465f-9947-651f842ce4a3\") " Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.103644 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99ac46f1-1c5d-465f-9947-651f842ce4a3-config" (OuterVolumeSpecName: "config") pod "99ac46f1-1c5d-465f-9947-651f842ce4a3" (UID: "99ac46f1-1c5d-465f-9947-651f842ce4a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.103932 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99ac46f1-1c5d-465f-9947-651f842ce4a3-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.104671 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99ac46f1-1c5d-465f-9947-651f842ce4a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "99ac46f1-1c5d-465f-9947-651f842ce4a3" (UID: "99ac46f1-1c5d-465f-9947-651f842ce4a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.110436 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99ac46f1-1c5d-465f-9947-651f842ce4a3-kube-api-access-vfg2j" (OuterVolumeSpecName: "kube-api-access-vfg2j") pod "99ac46f1-1c5d-465f-9947-651f842ce4a3" (UID: "99ac46f1-1c5d-465f-9947-651f842ce4a3"). InnerVolumeSpecName "kube-api-access-vfg2j". 
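
What follows is the mirror image of the volume setup earlier: once the old dnsmasq pod (99ac46f1) is deleted, each of its volumes goes through UnmountVolume.TearDown and is reported "Volume detached", and about two seconds later kubelet_volumes.go logs "Cleaned up orphaned pod volumes dir" for /var/lib/kubelet/pods/<uid>/volumes (visible further down, at 20:27:22). A sketch of that final sweep — refuse to remove anything still populated, then drop the empty tree; the paths and the emptiness check are illustrative, not kubelet's actual kubelet_volumes.go logic:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func cleanupOrphan(podDir string) error {
        volumes := filepath.Join(podDir, "volumes")
        entries, err := os.ReadDir(volumes)
        if err != nil {
            return err
        }
        for _, e := range entries { // one dir per plugin, e.g. kubernetes.io~configmap
            inner, err := os.ReadDir(filepath.Join(volumes, e.Name()))
            if err != nil {
                return err
            }
            if len(inner) > 0 {
                return fmt.Errorf("%s still has volumes; skipping cleanup", e.Name())
            }
        }
        return os.RemoveAll(volumes)
    }

    func main() {
        dir, _ := os.MkdirTemp("", "pod-uid-*")
        os.MkdirAll(filepath.Join(dir, "volumes", "kubernetes.io~configmap"), 0o755)
        fmt.Println(cleanupOrphan(dir)) // <nil>: everything was torn down, dir removed
    }
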
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.207641 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfg2j\" (UniqueName: \"kubernetes.io/projected/99ac46f1-1c5d-465f-9947-651f842ce4a3-kube-api-access-vfg2j\") on node \"crc\" DevicePath \"\"" Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.209248 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99ac46f1-1c5d-465f-9947-651f842ce4a3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.403539 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.454045 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.483377 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bc5cbcd7-8ckk8"] Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.495327 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.513623 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.530687 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jc5cc"] Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.538194 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.546400 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f7f223cd-763c-408e-a3cf-067af57416af","Type":"ContainerStarted","Data":"a7f497306d93b655663b2ff3fa3cb26501fb3a0e19ad480a856876a8409e947b"} Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.547316 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d2c5e060-865d-405e-937d-1450a1928f49","Type":"ContainerStarted","Data":"4dfbefcccfc3ed40267bcd975bd9f439f245314cb156c1fc306e540f3b2356e1"} Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.548016 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jc5cc" event={"ID":"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa","Type":"ContainerStarted","Data":"b69050a2748a940d62954538714c06e458bbba34162f4fb31cb412bdecdb4d67"} Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.550043 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bc5cbcd7-8ckk8" event={"ID":"5c43b9dc-5f8d-414d-8e09-47c33607ce47","Type":"ContainerStarted","Data":"f3d4636a89885029b0d4873abda384a64e269d9e451633050b89dee498edbf20"} Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.550769 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aa9e0f34-290c-4297-b65b-2046ea8bd21d","Type":"ContainerStarted","Data":"870ce57aa7b1ed8222f8ccc2b200f8df468376d3b7f6bb116352427d829698ef"} Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.552682 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"ad881410-229a-4427-862b-8febd0e5ab61","Type":"ContainerStarted","Data":"573448dbdd215dea023daed0731040741e850d68a1f94dd36e5181061dec8d1a"} Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.554217 4858 generic.go:334] "Generic (PLEG): container finished" podID="298abc4c-aae8-461b-9742-7349f71de55f" containerID="b4bb3d71ae6b89bc15a14f30671ac4fad8dd1a28788578480ec9de91dfb3251e" exitCode=0 Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.554268 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr" event={"ID":"298abc4c-aae8-461b-9742-7349f71de55f","Type":"ContainerDied","Data":"b4bb3d71ae6b89bc15a14f30671ac4fad8dd1a28788578480ec9de91dfb3251e"} Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.554285 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr" event={"ID":"298abc4c-aae8-461b-9742-7349f71de55f","Type":"ContainerStarted","Data":"7a6fd0dd264e2052545e893d10c46df69d5607b4df18e5b5f6cd602ce41bbd59"} Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.561908 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4768f41e-8ff0-4cec-b741-75f8902eb0e8","Type":"ContainerStarted","Data":"7a6bd365480791bc8ad99fa6d70ce9e3575e957d6a8abf16c2796a2416a968f6"} Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.567580 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b55fdf8d9-s86qf" event={"ID":"99ac46f1-1c5d-465f-9947-651f842ce4a3","Type":"ContainerDied","Data":"c6d3ac221be3cbed54ae98ef049916257a82b4b17ff094c80971597e6811e936"} Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.567816 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b55fdf8d9-s86qf" Jan 27 20:27:20 crc kubenswrapper[4858]: E0127 20:27:20.627993 4858 kubelet_pods.go:349] "Failed to prepare subPath for volumeMount of the container" err="lstat /var/lib/kubelet/pods/298abc4c-aae8-461b-9742-7349f71de55f/volumes/kubernetes.io~configmap/dns-svc/..2026_01_27_20_27_03.882448222/dns-svc failed: lstat /var/lib/kubelet/pods/298abc4c-aae8-461b-9742-7349f71de55f/volumes/kubernetes.io~configmap/dns-svc/..2026_01_27_20_27_03.882448222/dns-svc: no such file or directory" containerName="dnsmasq-dns" volumeMountName="dns-svc" Jan 27 20:27:20 crc kubenswrapper[4858]: E0127 20:27:20.628246 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:dnsmasq-dns,Image:38.129.56.46:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mxjxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-85bf9695fc-8jtxr_openstack(298abc4c-aae8-461b-9742-7349f71de55f): CreateContainerConfigError: failed to prepare subPath for volumeMount \"dns-svc\" of container \"dnsmasq-dns\"" logger="UnhandledError" Jan 27 20:27:20 crc kubenswrapper[4858]: E0127 20:27:20.629503 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerConfigError: \"failed to prepare subPath for volumeMount \\\"dns-svc\\\" of container \\\"dnsmasq-dns\\\"\"" pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr" podUID="298abc4c-aae8-461b-9742-7349f71de55f" Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.666761 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-vhbc7"] Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.678873 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b55fdf8d9-s86qf"] Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.689819 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b55fdf8d9-s86qf"] Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.776060 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.798631 4858 kubelet.go:2428] 
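
The CreateContainerConfigError above is worth decoding. ConfigMap volumes are materialized as a timestamped directory plus a ..data symlink that is swapped atomically on update, and a subPath mount is resolved through that symlink into the timestamped path. The failing lstat targets an already-resolved timestamped path (..2026_01_27_20_27_03.882448222/dns-svc), so by the time the kubelet prepared the dnsmasq-dns container, the ConfigMap had evidently rotated and the old directory was gone. A self-contained simulation of that failure mode (the second timestamp below is made up for the illustration):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        root, _ := os.MkdirTemp("", "cm-volume-*")
        v1 := filepath.Join(root, "..2026_01_27_20_27_03.882448222")
        os.MkdirAll(v1, 0o755)
        os.WriteFile(filepath.Join(v1, "dns-svc"), []byte("old"), 0o644)
        os.Symlink(filepath.Base(v1), filepath.Join(root, "..data"))

        // Container creation resolves the subPath through ..data exactly once...
        resolved, _ := filepath.EvalSymlinks(filepath.Join(root, "..data", "dns-svc"))
        fmt.Println("resolved subPath:", resolved)

        // ...then the ConfigMap updates: new timestamped dir, atomic symlink swap,
        // old dir pruned.
        v2 := filepath.Join(root, "..2026_01_27_20_28_00.000000000")
        os.MkdirAll(v2, 0o755)
        os.WriteFile(filepath.Join(v2, "dns-svc"), []byte("new"), 0o644)
        os.Remove(filepath.Join(root, "..data"))
        os.Symlink(filepath.Base(v2), filepath.Join(root, "..data"))
        os.RemoveAll(v1)

        // The stale resolved path now fails just like the kubelet's lstat above.
        if _, err := os.Lstat(resolved); err != nil {
            fmt.Println("lstat:", err) // no such file or directory
        }
    }

A fresh resolution through ..data would find the new directory; the failure is that the old, already-resolved path was reused.
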
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-767476ffd5-svc5x"] Jan 27 20:27:20 crc kubenswrapper[4858]: W0127 20:27:20.816720 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod985b6324_1458_4ed6_9aaa_019ce90601f9.slice/crio-0288ad13cbaec22c3c2962e6e7fb7597bea7c08b4e01c1aea5652ff81ed270bd WatchSource:0}: Error finding container 0288ad13cbaec22c3c2962e6e7fb7597bea7c08b4e01c1aea5652ff81ed270bd: Status 404 returned error can't find the container with id 0288ad13cbaec22c3c2962e6e7fb7597bea7c08b4e01c1aea5652ff81ed270bd Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.822447 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 20:27:20 crc kubenswrapper[4858]: W0127 20:27:20.824238 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c539609_6c9e_46bc_a0d7_6a629e83ce17.slice/crio-c2118916db92c53173533b8d5777f63a65794de272877e7825fa17424265afb9 WatchSource:0}: Error finding container c2118916db92c53173533b8d5777f63a65794de272877e7825fa17424265afb9: Status 404 returned error can't find the container with id c2118916db92c53173533b8d5777f63a65794de272877e7825fa17424265afb9 Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.843432 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 20:27:20 crc kubenswrapper[4858]: W0127 20:27:20.860320 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd2f70df_955b_44ba_a1be_a2f9d06a862c.slice/crio-fe9b6d795f78150b19bb04054411702a43309f0dbbadb4aa6d6209076013cba2 WatchSource:0}: Error finding container fe9b6d795f78150b19bb04054411702a43309f0dbbadb4aa6d6209076013cba2: Status 404 returned error can't find the container with id fe9b6d795f78150b19bb04054411702a43309f0dbbadb4aa6d6209076013cba2 Jan 27 20:27:20 crc kubenswrapper[4858]: I0127 20:27:20.899662 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 20:27:21 crc kubenswrapper[4858]: I0127 20:27:21.595191 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bd2f70df-955b-44ba-a1be-a2f9d06a862c","Type":"ContainerStarted","Data":"fe9b6d795f78150b19bb04054411702a43309f0dbbadb4aa6d6209076013cba2"} Jan 27 20:27:21 crc kubenswrapper[4858]: I0127 20:27:21.602066 4858 generic.go:334] "Generic (PLEG): container finished" podID="985b6324-1458-4ed6-9aaa-019ce90601f9" containerID="47c286361edd244e1eeae0005a93630848c8349b8de28c7a2574ec97a47516ed" exitCode=0 Jan 27 20:27:21 crc kubenswrapper[4858]: I0127 20:27:21.602134 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767476ffd5-svc5x" event={"ID":"985b6324-1458-4ed6-9aaa-019ce90601f9","Type":"ContainerDied","Data":"47c286361edd244e1eeae0005a93630848c8349b8de28c7a2574ec97a47516ed"} Jan 27 20:27:21 crc kubenswrapper[4858]: I0127 20:27:21.602162 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767476ffd5-svc5x" event={"ID":"985b6324-1458-4ed6-9aaa-019ce90601f9","Type":"ContainerStarted","Data":"0288ad13cbaec22c3c2962e6e7fb7597bea7c08b4e01c1aea5652ff81ed270bd"} Jan 27 20:27:21 crc kubenswrapper[4858]: I0127 20:27:21.604430 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"b049e044-9171-4011-9c90-c334fa955321","Type":"ContainerStarted","Data":"31a250f1751d07d0b40faa941040490d06da6c2426940b8d278c7ba97cae2d58"} Jan 27 20:27:21 crc kubenswrapper[4858]: I0127 20:27:21.609778 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2","Type":"ContainerStarted","Data":"320b2cb1f3771689c2a0130fa8366cf8099a73e035ba71f3bed62bebd36d86c4"} Jan 27 20:27:21 crc kubenswrapper[4858]: I0127 20:27:21.612587 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vhbc7" event={"ID":"b28f0be1-aa4f-445d-95c3-1abd84b9c82a","Type":"ContainerStarted","Data":"97cfca55f79a3d15b58545ee82d651b0e66a6d2442c5920f61a6b39c6fde2661"} Jan 27 20:27:21 crc kubenswrapper[4858]: I0127 20:27:21.619613 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"6c539609-6c9e-46bc-a0d7-6a629e83ce17","Type":"ContainerStarted","Data":"c2118916db92c53173533b8d5777f63a65794de272877e7825fa17424265afb9"} Jan 27 20:27:21 crc kubenswrapper[4858]: I0127 20:27:21.635731 4858 generic.go:334] "Generic (PLEG): container finished" podID="5c43b9dc-5f8d-414d-8e09-47c33607ce47" containerID="948c9086667c6fa9b47b874817b330ca281c9a3bcbc0d28a0dee235cbbef95c7" exitCode=0 Jan 27 20:27:21 crc kubenswrapper[4858]: I0127 20:27:21.637124 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bc5cbcd7-8ckk8" event={"ID":"5c43b9dc-5f8d-414d-8e09-47c33607ce47","Type":"ContainerDied","Data":"948c9086667c6fa9b47b874817b330ca281c9a3bcbc0d28a0dee235cbbef95c7"} Jan 27 20:27:21 crc kubenswrapper[4858]: I0127 20:27:21.733285 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 20:27:22 crc kubenswrapper[4858]: I0127 20:27:22.086929 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99ac46f1-1c5d-465f-9947-651f842ce4a3" path="/var/lib/kubelet/pods/99ac46f1-1c5d-465f-9947-651f842ce4a3/volumes" Jan 27 20:27:22 crc kubenswrapper[4858]: W0127 20:27:22.185617 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13a7d533_55e2_4072_add8_4cd41613da8a.slice/crio-1dca41c773f861a76b2e822194ea1eac3ebb6f39d295868e2ad9de269867dfad WatchSource:0}: Error finding container 1dca41c773f861a76b2e822194ea1eac3ebb6f39d295868e2ad9de269867dfad: Status 404 returned error can't find the container with id 1dca41c773f861a76b2e822194ea1eac3ebb6f39d295868e2ad9de269867dfad Jan 27 20:27:22 crc kubenswrapper[4858]: I0127 20:27:22.267054 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57bc5cbcd7-8ckk8" Jan 27 20:27:22 crc kubenswrapper[4858]: I0127 20:27:22.368042 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzfr5\" (UniqueName: \"kubernetes.io/projected/5c43b9dc-5f8d-414d-8e09-47c33607ce47-kube-api-access-qzfr5\") pod \"5c43b9dc-5f8d-414d-8e09-47c33607ce47\" (UID: \"5c43b9dc-5f8d-414d-8e09-47c33607ce47\") " Jan 27 20:27:22 crc kubenswrapper[4858]: I0127 20:27:22.368108 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c43b9dc-5f8d-414d-8e09-47c33607ce47-config\") pod \"5c43b9dc-5f8d-414d-8e09-47c33607ce47\" (UID: \"5c43b9dc-5f8d-414d-8e09-47c33607ce47\") " Jan 27 20:27:22 crc kubenswrapper[4858]: I0127 20:27:22.368335 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c43b9dc-5f8d-414d-8e09-47c33607ce47-dns-svc\") pod \"5c43b9dc-5f8d-414d-8e09-47c33607ce47\" (UID: \"5c43b9dc-5f8d-414d-8e09-47c33607ce47\") " Jan 27 20:27:22 crc kubenswrapper[4858]: I0127 20:27:22.394767 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c43b9dc-5f8d-414d-8e09-47c33607ce47-kube-api-access-qzfr5" (OuterVolumeSpecName: "kube-api-access-qzfr5") pod "5c43b9dc-5f8d-414d-8e09-47c33607ce47" (UID: "5c43b9dc-5f8d-414d-8e09-47c33607ce47"). InnerVolumeSpecName "kube-api-access-qzfr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:27:22 crc kubenswrapper[4858]: I0127 20:27:22.396153 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c43b9dc-5f8d-414d-8e09-47c33607ce47-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c43b9dc-5f8d-414d-8e09-47c33607ce47" (UID: "5c43b9dc-5f8d-414d-8e09-47c33607ce47"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:27:22 crc kubenswrapper[4858]: I0127 20:27:22.410272 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c43b9dc-5f8d-414d-8e09-47c33607ce47-config" (OuterVolumeSpecName: "config") pod "5c43b9dc-5f8d-414d-8e09-47c33607ce47" (UID: "5c43b9dc-5f8d-414d-8e09-47c33607ce47"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:27:22 crc kubenswrapper[4858]: I0127 20:27:22.471005 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c43b9dc-5f8d-414d-8e09-47c33607ce47-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:27:22 crc kubenswrapper[4858]: I0127 20:27:22.471047 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzfr5\" (UniqueName: \"kubernetes.io/projected/5c43b9dc-5f8d-414d-8e09-47c33607ce47-kube-api-access-qzfr5\") on node \"crc\" DevicePath \"\"" Jan 27 20:27:22 crc kubenswrapper[4858]: I0127 20:27:22.471059 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c43b9dc-5f8d-414d-8e09-47c33607ce47-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:27:22 crc kubenswrapper[4858]: I0127 20:27:22.650341 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"13a7d533-55e2-4072-add8-4cd41613da8a","Type":"ContainerStarted","Data":"1dca41c773f861a76b2e822194ea1eac3ebb6f39d295868e2ad9de269867dfad"} Jan 27 20:27:22 crc kubenswrapper[4858]: I0127 20:27:22.652760 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bc5cbcd7-8ckk8" event={"ID":"5c43b9dc-5f8d-414d-8e09-47c33607ce47","Type":"ContainerDied","Data":"f3d4636a89885029b0d4873abda384a64e269d9e451633050b89dee498edbf20"} Jan 27 20:27:22 crc kubenswrapper[4858]: I0127 20:27:22.652840 4858 scope.go:117] "RemoveContainer" containerID="948c9086667c6fa9b47b874817b330ca281c9a3bcbc0d28a0dee235cbbef95c7" Jan 27 20:27:22 crc kubenswrapper[4858]: I0127 20:27:22.652875 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bc5cbcd7-8ckk8" Jan 27 20:27:22 crc kubenswrapper[4858]: I0127 20:27:22.709360 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bc5cbcd7-8ckk8"] Jan 27 20:27:22 crc kubenswrapper[4858]: I0127 20:27:22.715269 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57bc5cbcd7-8ckk8"] Jan 27 20:27:24 crc kubenswrapper[4858]: I0127 20:27:24.082114 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c43b9dc-5f8d-414d-8e09-47c33607ce47" path="/var/lib/kubelet/pods/5c43b9dc-5f8d-414d-8e09-47c33607ce47/volumes" Jan 27 20:27:33 crc kubenswrapper[4858]: E0127 20:27:33.137176 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-ovn-base:watcher_latest" Jan 27 20:27:33 crc kubenswrapper[4858]: E0127 20:27:33.138017 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-ovn-base:watcher_latest" Jan 27 20:27:33 crc kubenswrapper[4858]: E0127 20:27:33.138235 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:ovsdb-server-init,Image:38.129.56.46:5001/podified-master-centos10/openstack-ovn-base:watcher_latest,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59h666h69h565h79h5cfh58dh5dbh84h69h66chc5h588hdh547h575hbfh58ch567h65h6fh578h5cdh678h54chf8h88hffh5c6h648h646h5fcq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cffbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-vhbc7_openstack(b28f0be1-aa4f-445d-95c3-1abd84b9c82a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:27:33 crc kubenswrapper[4858]: E0127 20:27:33.139996 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-vhbc7" podUID="b28f0be1-aa4f-445d-95c3-1abd84b9c82a" Jan 27 20:27:33 crc kubenswrapper[4858]: E0127 20:27:33.268510 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 27 20:27:33 crc kubenswrapper[4858]: E0127 20:27:33.268605 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 27 20:27:33 crc kubenswrapper[4858]: E0127 20:27:33.268791 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.129.56.46:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp 
/tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qh8mw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-notifications-server-0_openstack(6c539609-6c9e-46bc-a0d7-6a629e83ce17): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:27:33 crc kubenswrapper[4858]: E0127 20:27:33.270723 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-notifications-server-0" podUID="6c539609-6c9e-46bc-a0d7-6a629e83ce17" Jan 27 20:27:33 crc kubenswrapper[4858]: E0127 20:27:33.293110 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 27 20:27:33 crc kubenswrapper[4858]: E0127 20:27:33.293180 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 27 20:27:33 crc kubenswrapper[4858]: E0127 20:27:33.293327 4858 
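
One detail in the rabbitmq setup-container dump deserves unpacking: cpu: {{20 -3} {} 20m DecimalSI} and memory: {{67108864 0} {} BinarySI} are the raw Go-struct form of apimachinery's resource.Quantity — an unscaled integer plus a decimal exponent — so the requests are 20 x 10^-3 cores (20m) and 67108864 bytes (64Mi). With the k8s.io/apimachinery module on the module path, the round trip looks like this:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        cpu := resource.MustParse("20m")  // stored as unscaled 20 at scale -3, DecimalSI
        mem := resource.MustParse("64Mi") // stored as 67108864 bytes, BinarySI
        fmt.Println(cpu.MilliValue(), "millicores /", mem.Value(), "bytes")
    }
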
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.129.56.46:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b9h4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:27:33 crc kubenswrapper[4858]: E0127 20:27:33.294588 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" Jan 27 20:27:33 crc kubenswrapper[4858]: E0127 20:27:33.761617 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.46:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" 
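
Note the change of shape in the errors here and just below: the 20:27:33.2x entries report ErrImagePull (a pull was actually attempted and failed), while the 20:27:33.76x entries report ImagePullBackOff — no pull is attempted at all; the kubelet is waiting out an exponential backoff before trying again. A sketch of the schedule, assuming the commonly cited kubelet defaults of a 10s initial delay doubling to a 5m cap (the defaults are not printed in this log, so treat them as an assumption):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed kubelet image-pull backoff: 10s initial, doubled per failure, 5m cap.
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("failed pull #%d -> back off %v before retrying\n", attempt, delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
    }
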
pod="openstack/rabbitmq-server-0" podUID="825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" Jan 27 20:27:33 crc kubenswrapper[4858]: E0127 20:27:33.762928 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.46:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-notifications-server-0" podUID="6c539609-6c9e-46bc-a0d7-6a629e83ce17" Jan 27 20:27:33 crc kubenswrapper[4858]: E0127 20:27:33.762964 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.46:5001/podified-master-centos10/openstack-ovn-base:watcher_latest\\\"\"" pod="openstack/ovn-controller-ovs-vhbc7" podUID="b28f0be1-aa4f-445d-95c3-1abd84b9c82a" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:34.999628 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.001440 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.001640 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.129.56.46:5001/podified-master-centos10/openstack-mariadb:watcher_latest,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8985,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,Resi
zePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(4768f41e-8ff0-4cec-b741-75f8902eb0e8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.003538 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="4768f41e-8ff0-4cec-b741-75f8902eb0e8" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.077933 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.077994 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.078126 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.129.56.46:5001/podified-master-centos10/openstack-mariadb:watcher_latest,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flxqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(f7f223cd-763c-408e-a3cf-067af57416af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:27:35 crc 
kubenswrapper[4858]: E0127 20:27:35.079342 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="f7f223cd-763c-408e-a3cf-067af57416af" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.565529 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.565623 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.565873 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:38.129.56.46:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd7h5b9hf7h69h555h5b6h65bh97h7ch94h584hb8h5fch5c8h59dh56chf4h587h5c8h54h87h5c8h6dh5ddh648h674hbbh684h698h66ch65fh699q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkxgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(b049e044-9171-4011-9c90-c334fa955321): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.733014 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.733089 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.733283 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
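
Every container spec dumped in this capture — the dnsmasq pair earlier, ovsdbserver-nb above, ovsdbserver-sb next — carries a CONFIG_HASH environment variable. Nothing in the dumped commands consumes it; its apparent job is to bake a digest of the rendered service configuration into the pod template, so that when the operator regenerates the config the hash changes, the template changes, and Kubernetes rolls the pod. The n...q strings in these dumps are the OpenStack operators' own hash encoding; a plain sha256 is enough to show the pattern:

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    func main() {
        // Digest of the rendered service config; any change to the input yields a
        // new env value and therefore a new pod template. The real operators use
        // their own encoding (the n...q strings in the log); sha256 is illustrative.
        rendered := []byte("ovsdb-server config as rendered by the operator")
        fmt.Printf("CONFIG_HASH=%x\n", sha256.Sum256(rendered))
    }
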
&Container{Name:ovsdbserver-sb,Image:38.129.56.46:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cdhb9h5c5h559h66bhc8h57h5bch55fh6dh5h646h9dh5fbh56fh66fh5dch57bh98h75hd5h54fh67chbh7h55bh5d9h565h5b9h669h5b9h556q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t5cwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-sb-0_openstack(13a7d533-55e2-4072-add8-4cd41613da8a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.782330 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.46:5001/podified-master-centos10/openstack-mariadb:watcher_latest\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="f7f223cd-763c-408e-a3cf-067af57416af" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.786461 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.46:5001/podified-master-centos10/openstack-mariadb:watcher_latest\\\"\"" pod="openstack/openstack-galera-0" podUID="4768f41e-8ff0-4cec-b741-75f8902eb0e8" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.995499 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.995625 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.995911 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:38.129.56.46:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59h666h69h565h79h5cfh58dh5dbh84h69h66chc5h588hdh547h575hbfh58ch567h65h6fh578h5cdh678h54chf8h88hffh5c6h648h646h5fcq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtlss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-jc5cc_openstack(d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:27:35 crc kubenswrapper[4858]: E0127 20:27:35.997577 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-jc5cc" podUID="d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa" Jan 27 20:27:36 crc kubenswrapper[4858]: I0127 20:27:36.799734 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr" event={"ID":"298abc4c-aae8-461b-9742-7349f71de55f","Type":"ContainerStarted","Data":"d4ecf005a39e2c364014cc20abb222b9c14ceb5da59f50b17651cc2eda8c7e88"} Jan 27 20:27:36 crc kubenswrapper[4858]: E0127 20:27:36.800589 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.46:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest\\\"\"" pod="openstack/ovn-controller-jc5cc" podUID="d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa" Jan 27 20:27:36 crc kubenswrapper[4858]: I0127 20:27:36.800755 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr" Jan 27 20:27:36 crc kubenswrapper[4858]: I0127 20:27:36.839180 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr" podStartSLOduration=33.769762493 podStartE2EDuration="33.839147256s" podCreationTimestamp="2026-01-27 20:27:03 +0000 UTC" firstStartedPulling="2026-01-27 20:27:19.781675434 +0000 UTC m=+1184.489491140" lastFinishedPulling="2026-01-27 20:27:19.851060197 +0000 UTC m=+1184.558875903" observedRunningTime="2026-01-27 20:27:36.819462113 +0000 UTC m=+1201.527277819" watchObservedRunningTime="2026-01-27 20:27:36.839147256 +0000 UTC m=+1201.546962962" Jan 27 20:27:37 crc kubenswrapper[4858]: E0127 20:27:37.077267 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 27 20:27:37 crc kubenswrapper[4858]: E0127 20:27:37.077446 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 27 20:27:37 crc kubenswrapper[4858]: E0127 20:27:37.077827 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v6mtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(d2c5e060-865d-405e-937d-1450a1928f49): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 20:27:37 crc kubenswrapper[4858]: E0127 20:27:37.079045 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="d2c5e060-865d-405e-937d-1450a1928f49" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.341004 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-wzfwm"] Jan 27 20:27:37 crc kubenswrapper[4858]: E0127 20:27:37.342416 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c43b9dc-5f8d-414d-8e09-47c33607ce47" containerName="init" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.342513 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c43b9dc-5f8d-414d-8e09-47c33607ce47" containerName="init" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.342782 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c43b9dc-5f8d-414d-8e09-47c33607ce47" containerName="init" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.343591 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.350981 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.373007 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wzfwm"] Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.499937 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e0f41c-e13e-41d1-bc48-71c6ef96994c-combined-ca-bundle\") pod \"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.500017 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/51e0f41c-e13e-41d1-bc48-71c6ef96994c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.500058 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/51e0f41c-e13e-41d1-bc48-71c6ef96994c-ovs-rundir\") pod \"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.500113 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/51e0f41c-e13e-41d1-bc48-71c6ef96994c-ovn-rundir\") pod \"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.500207 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvj4g\" (UniqueName: \"kubernetes.io/projected/51e0f41c-e13e-41d1-bc48-71c6ef96994c-kube-api-access-fvj4g\") pod \"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.500246 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51e0f41c-e13e-41d1-bc48-71c6ef96994c-config\") pod \"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.549111 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767476ffd5-svc5x"] Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.590014 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b8bd89969-bzsqx"] Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.591905 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.597258 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.601879 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e0f41c-e13e-41d1-bc48-71c6ef96994c-combined-ca-bundle\") pod \"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.601957 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/51e0f41c-e13e-41d1-bc48-71c6ef96994c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.602021 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/51e0f41c-e13e-41d1-bc48-71c6ef96994c-ovs-rundir\") pod \"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.602076 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/51e0f41c-e13e-41d1-bc48-71c6ef96994c-ovn-rundir\") pod \"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.602141 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvj4g\" (UniqueName: \"kubernetes.io/projected/51e0f41c-e13e-41d1-bc48-71c6ef96994c-kube-api-access-fvj4g\") pod \"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.602173 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51e0f41c-e13e-41d1-bc48-71c6ef96994c-config\") pod \"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.603534 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/51e0f41c-e13e-41d1-bc48-71c6ef96994c-ovn-rundir\") pod \"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.604740 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/51e0f41c-e13e-41d1-bc48-71c6ef96994c-ovs-rundir\") pod \"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.605344 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51e0f41c-e13e-41d1-bc48-71c6ef96994c-config\") pod 
\"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.608935 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b8bd89969-bzsqx"] Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.617507 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/51e0f41c-e13e-41d1-bc48-71c6ef96994c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.621780 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e0f41c-e13e-41d1-bc48-71c6ef96994c-combined-ca-bundle\") pod \"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.660625 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvj4g\" (UniqueName: \"kubernetes.io/projected/51e0f41c-e13e-41d1-bc48-71c6ef96994c-kube-api-access-fvj4g\") pod \"ovn-controller-metrics-wzfwm\" (UID: \"51e0f41c-e13e-41d1-bc48-71c6ef96994c\") " pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.704487 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8bd89969-bzsqx\" (UID: \"6428c38a-0174-4a1d-b83a-afca9a457b08\") " pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.704620 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-dns-svc\") pod \"dnsmasq-dns-5b8bd89969-bzsqx\" (UID: \"6428c38a-0174-4a1d-b83a-afca9a457b08\") " pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.704656 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzh4b\" (UniqueName: \"kubernetes.io/projected/6428c38a-0174-4a1d-b83a-afca9a457b08-kube-api-access-mzh4b\") pod \"dnsmasq-dns-5b8bd89969-bzsqx\" (UID: \"6428c38a-0174-4a1d-b83a-afca9a457b08\") " pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.704705 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-config\") pod \"dnsmasq-dns-5b8bd89969-bzsqx\" (UID: \"6428c38a-0174-4a1d-b83a-afca9a457b08\") " pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.767801 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-wzfwm" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.773523 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85bf9695fc-8jtxr"] Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.806305 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8bd89969-bzsqx\" (UID: \"6428c38a-0174-4a1d-b83a-afca9a457b08\") " pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.806394 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-dns-svc\") pod \"dnsmasq-dns-5b8bd89969-bzsqx\" (UID: \"6428c38a-0174-4a1d-b83a-afca9a457b08\") " pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.806427 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzh4b\" (UniqueName: \"kubernetes.io/projected/6428c38a-0174-4a1d-b83a-afca9a457b08-kube-api-access-mzh4b\") pod \"dnsmasq-dns-5b8bd89969-bzsqx\" (UID: \"6428c38a-0174-4a1d-b83a-afca9a457b08\") " pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.806486 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-config\") pod \"dnsmasq-dns-5b8bd89969-bzsqx\" (UID: \"6428c38a-0174-4a1d-b83a-afca9a457b08\") " pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.808328 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8bd89969-bzsqx\" (UID: \"6428c38a-0174-4a1d-b83a-afca9a457b08\") " pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.815539 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-config\") pod \"dnsmasq-dns-5b8bd89969-bzsqx\" (UID: \"6428c38a-0174-4a1d-b83a-afca9a457b08\") " pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.815794 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-dns-svc\") pod \"dnsmasq-dns-5b8bd89969-bzsqx\" (UID: \"6428c38a-0174-4a1d-b83a-afca9a457b08\") " pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.831337 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69ccbdd8fc-rk8bc"] Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.839762 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.848505 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.864459 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69ccbdd8fc-rk8bc"] Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.879388 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bd2f70df-955b-44ba-a1be-a2f9d06a862c","Type":"ContainerStarted","Data":"1314375e37b71191f78e8d6a91b037662ce1ef23f35a342141226466d1546707"} Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.879864 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.881106 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzh4b\" (UniqueName: \"kubernetes.io/projected/6428c38a-0174-4a1d-b83a-afca9a457b08-kube-api-access-mzh4b\") pod \"dnsmasq-dns-5b8bd89969-bzsqx\" (UID: \"6428c38a-0174-4a1d-b83a-afca9a457b08\") " pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.898713 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767476ffd5-svc5x" event={"ID":"985b6324-1458-4ed6-9aaa-019ce90601f9","Type":"ContainerStarted","Data":"50c851b04e8ae8abd76ffb4703a74dc21080067cc77eb1f27019e4ad406afe54"} Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.903443 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-767476ffd5-svc5x" Jan 27 20:27:37 crc kubenswrapper[4858]: E0127 20:27:37.904489 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="d2c5e060-865d-405e-937d-1450a1928f49" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.910021 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.920346 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=15.535793611999999 podStartE2EDuration="30.920315827s" podCreationTimestamp="2026-01-27 20:27:07 +0000 UTC" firstStartedPulling="2026-01-27 20:27:20.86293033 +0000 UTC m=+1185.570746036" lastFinishedPulling="2026-01-27 20:27:36.247452545 +0000 UTC m=+1200.955268251" observedRunningTime="2026-01-27 20:27:37.90299214 +0000 UTC m=+1202.610807856" watchObservedRunningTime="2026-01-27 20:27:37.920315827 +0000 UTC m=+1202.628131533" Jan 27 20:27:37 crc kubenswrapper[4858]: I0127 20:27:37.945671 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-767476ffd5-svc5x" podStartSLOduration=34.945641416 podStartE2EDuration="34.945641416s" podCreationTimestamp="2026-01-27 20:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:27:37.943407814 +0000 UTC m=+1202.651223540" watchObservedRunningTime="2026-01-27 20:27:37.945641416 +0000 UTC m=+1202.653457112" Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.011157 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-dns-svc\") pod \"dnsmasq-dns-69ccbdd8fc-rk8bc\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.011229 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx5pj\" (UniqueName: \"kubernetes.io/projected/853aaea7-10f1-421e-9ed2-e022a635e124-kube-api-access-gx5pj\") pod \"dnsmasq-dns-69ccbdd8fc-rk8bc\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.011274 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-ovsdbserver-nb\") pod \"dnsmasq-dns-69ccbdd8fc-rk8bc\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.011296 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-config\") pod \"dnsmasq-dns-69ccbdd8fc-rk8bc\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.011458 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-ovsdbserver-sb\") pod \"dnsmasq-dns-69ccbdd8fc-rk8bc\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.113907 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-ovsdbserver-sb\") pod 
\"dnsmasq-dns-69ccbdd8fc-rk8bc\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.114480 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-dns-svc\") pod \"dnsmasq-dns-69ccbdd8fc-rk8bc\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.114529 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx5pj\" (UniqueName: \"kubernetes.io/projected/853aaea7-10f1-421e-9ed2-e022a635e124-kube-api-access-gx5pj\") pod \"dnsmasq-dns-69ccbdd8fc-rk8bc\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.114579 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-ovsdbserver-nb\") pod \"dnsmasq-dns-69ccbdd8fc-rk8bc\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.114603 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-config\") pod \"dnsmasq-dns-69ccbdd8fc-rk8bc\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.115341 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-ovsdbserver-sb\") pod \"dnsmasq-dns-69ccbdd8fc-rk8bc\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.115658 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-config\") pod \"dnsmasq-dns-69ccbdd8fc-rk8bc\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.116322 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-ovsdbserver-nb\") pod \"dnsmasq-dns-69ccbdd8fc-rk8bc\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.117025 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-dns-svc\") pod \"dnsmasq-dns-69ccbdd8fc-rk8bc\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.293289 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx5pj\" (UniqueName: \"kubernetes.io/projected/853aaea7-10f1-421e-9ed2-e022a635e124-kube-api-access-gx5pj\") pod \"dnsmasq-dns-69ccbdd8fc-rk8bc\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " 
pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.437654 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wzfwm"] Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.469775 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.621194 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b8bd89969-bzsqx"] Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.905156 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-767476ffd5-svc5x" podUID="985b6324-1458-4ed6-9aaa-019ce90601f9" containerName="dnsmasq-dns" containerID="cri-o://50c851b04e8ae8abd76ffb4703a74dc21080067cc77eb1f27019e4ad406afe54" gracePeriod=10 Jan 27 20:27:38 crc kubenswrapper[4858]: I0127 20:27:38.905532 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr" podUID="298abc4c-aae8-461b-9742-7349f71de55f" containerName="dnsmasq-dns" containerID="cri-o://d4ecf005a39e2c364014cc20abb222b9c14ceb5da59f50b17651cc2eda8c7e88" gracePeriod=10 Jan 27 20:27:39 crc kubenswrapper[4858]: I0127 20:27:39.917341 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" event={"ID":"6428c38a-0174-4a1d-b83a-afca9a457b08","Type":"ContainerStarted","Data":"2223ff1a04ea52a8d17f1129fb40e3f88c8727819f0f4faa565e65b7d80bff47"} Jan 27 20:27:39 crc kubenswrapper[4858]: I0127 20:27:39.922139 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad881410-229a-4427-862b-8febd0e5ab61","Type":"ContainerStarted","Data":"ca76863e730916538ae7127ed44c1cadecfed9ca3f49b484cc25424b7224480b"} Jan 27 20:27:39 crc kubenswrapper[4858]: I0127 20:27:39.928802 4858 generic.go:334] "Generic (PLEG): container finished" podID="985b6324-1458-4ed6-9aaa-019ce90601f9" containerID="50c851b04e8ae8abd76ffb4703a74dc21080067cc77eb1f27019e4ad406afe54" exitCode=0 Jan 27 20:27:39 crc kubenswrapper[4858]: I0127 20:27:39.928909 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767476ffd5-svc5x" event={"ID":"985b6324-1458-4ed6-9aaa-019ce90601f9","Type":"ContainerDied","Data":"50c851b04e8ae8abd76ffb4703a74dc21080067cc77eb1f27019e4ad406afe54"} Jan 27 20:27:39 crc kubenswrapper[4858]: I0127 20:27:39.930775 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wzfwm" event={"ID":"51e0f41c-e13e-41d1-bc48-71c6ef96994c","Type":"ContainerStarted","Data":"5bda1ba76b14e8028c71890e53773c8f98956c3bcab9df36be9b67ca53325f1d"} Jan 27 20:27:39 crc kubenswrapper[4858]: I0127 20:27:39.936262 4858 generic.go:334] "Generic (PLEG): container finished" podID="298abc4c-aae8-461b-9742-7349f71de55f" containerID="d4ecf005a39e2c364014cc20abb222b9c14ceb5da59f50b17651cc2eda8c7e88" exitCode=0 Jan 27 20:27:39 crc kubenswrapper[4858]: I0127 20:27:39.936307 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr" event={"ID":"298abc4c-aae8-461b-9742-7349f71de55f","Type":"ContainerDied","Data":"d4ecf005a39e2c364014cc20abb222b9c14ceb5da59f50b17651cc2eda8c7e88"} Jan 27 20:27:40 crc kubenswrapper[4858]: I0127 20:27:40.708396 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-767476ffd5-svc5x" Jan 27 20:27:40 crc kubenswrapper[4858]: I0127 20:27:40.719271 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr" Jan 27 20:27:40 crc kubenswrapper[4858]: I0127 20:27:40.910821 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/298abc4c-aae8-461b-9742-7349f71de55f-config\") pod \"298abc4c-aae8-461b-9742-7349f71de55f\" (UID: \"298abc4c-aae8-461b-9742-7349f71de55f\") " Jan 27 20:27:40 crc kubenswrapper[4858]: I0127 20:27:40.911462 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxjxp\" (UniqueName: \"kubernetes.io/projected/298abc4c-aae8-461b-9742-7349f71de55f-kube-api-access-mxjxp\") pod \"298abc4c-aae8-461b-9742-7349f71de55f\" (UID: \"298abc4c-aae8-461b-9742-7349f71de55f\") " Jan 27 20:27:40 crc kubenswrapper[4858]: I0127 20:27:40.911607 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/985b6324-1458-4ed6-9aaa-019ce90601f9-dns-svc\") pod \"985b6324-1458-4ed6-9aaa-019ce90601f9\" (UID: \"985b6324-1458-4ed6-9aaa-019ce90601f9\") " Jan 27 20:27:40 crc kubenswrapper[4858]: I0127 20:27:40.911626 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/985b6324-1458-4ed6-9aaa-019ce90601f9-config\") pod \"985b6324-1458-4ed6-9aaa-019ce90601f9\" (UID: \"985b6324-1458-4ed6-9aaa-019ce90601f9\") " Jan 27 20:27:40 crc kubenswrapper[4858]: I0127 20:27:40.912018 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2fs8\" (UniqueName: \"kubernetes.io/projected/985b6324-1458-4ed6-9aaa-019ce90601f9-kube-api-access-l2fs8\") pod \"985b6324-1458-4ed6-9aaa-019ce90601f9\" (UID: \"985b6324-1458-4ed6-9aaa-019ce90601f9\") " Jan 27 20:27:40 crc kubenswrapper[4858]: I0127 20:27:40.912041 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/298abc4c-aae8-461b-9742-7349f71de55f-dns-svc\") pod \"298abc4c-aae8-461b-9742-7349f71de55f\" (UID: \"298abc4c-aae8-461b-9742-7349f71de55f\") " Jan 27 20:27:40 crc kubenswrapper[4858]: I0127 20:27:40.922104 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/298abc4c-aae8-461b-9742-7349f71de55f-kube-api-access-mxjxp" (OuterVolumeSpecName: "kube-api-access-mxjxp") pod "298abc4c-aae8-461b-9742-7349f71de55f" (UID: "298abc4c-aae8-461b-9742-7349f71de55f"). InnerVolumeSpecName "kube-api-access-mxjxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:27:40 crc kubenswrapper[4858]: I0127 20:27:40.939470 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/985b6324-1458-4ed6-9aaa-019ce90601f9-kube-api-access-l2fs8" (OuterVolumeSpecName: "kube-api-access-l2fs8") pod "985b6324-1458-4ed6-9aaa-019ce90601f9" (UID: "985b6324-1458-4ed6-9aaa-019ce90601f9"). InnerVolumeSpecName "kube-api-access-l2fs8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:27:40 crc kubenswrapper[4858]: I0127 20:27:40.968659 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/985b6324-1458-4ed6-9aaa-019ce90601f9-config" (OuterVolumeSpecName: "config") pod "985b6324-1458-4ed6-9aaa-019ce90601f9" (UID: "985b6324-1458-4ed6-9aaa-019ce90601f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:27:40 crc kubenswrapper[4858]: I0127 20:27:40.975693 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/298abc4c-aae8-461b-9742-7349f71de55f-config" (OuterVolumeSpecName: "config") pod "298abc4c-aae8-461b-9742-7349f71de55f" (UID: "298abc4c-aae8-461b-9742-7349f71de55f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:27:40 crc kubenswrapper[4858]: I0127 20:27:40.988602 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" event={"ID":"6428c38a-0174-4a1d-b83a-afca9a457b08","Type":"ContainerStarted","Data":"e78c85a4de02a8c6ba4a8fe7344f10a97b7b8b0bf8563bfcc3ae7c30ff0bbd9b"} Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:40.998807 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/298abc4c-aae8-461b-9742-7349f71de55f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "298abc4c-aae8-461b-9742-7349f71de55f" (UID: "298abc4c-aae8-461b-9742-7349f71de55f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.013983 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aa9e0f34-290c-4297-b65b-2046ea8bd21d","Type":"ContainerStarted","Data":"b439e35aa3ebaa567247e0fa57cfdd25a6b0cef090835b9e2bb45d1e2b49fc66"} Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.022398 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2fs8\" (UniqueName: \"kubernetes.io/projected/985b6324-1458-4ed6-9aaa-019ce90601f9-kube-api-access-l2fs8\") on node \"crc\" DevicePath \"\"" Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.022433 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/298abc4c-aae8-461b-9742-7349f71de55f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.022445 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/298abc4c-aae8-461b-9742-7349f71de55f-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.022456 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxjxp\" (UniqueName: \"kubernetes.io/projected/298abc4c-aae8-461b-9742-7349f71de55f-kube-api-access-mxjxp\") on node \"crc\" DevicePath \"\"" Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.022467 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/985b6324-1458-4ed6-9aaa-019ce90601f9-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.034732 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69ccbdd8fc-rk8bc"] Jan 27 20:27:41 crc kubenswrapper[4858]: W0127 20:27:41.038235 4858 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod853aaea7_10f1_421e_9ed2_e022a635e124.slice/crio-59f2ed29e8ce91b84bd04fade37b367c8a4276ea703ea689fc8f2e46e964a3a7 WatchSource:0}: Error finding container 59f2ed29e8ce91b84bd04fade37b367c8a4276ea703ea689fc8f2e46e964a3a7: Status 404 returned error can't find the container with id 59f2ed29e8ce91b84bd04fade37b367c8a4276ea703ea689fc8f2e46e964a3a7 Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.052944 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767476ffd5-svc5x" event={"ID":"985b6324-1458-4ed6-9aaa-019ce90601f9","Type":"ContainerDied","Data":"0288ad13cbaec22c3c2962e6e7fb7597bea7c08b4e01c1aea5652ff81ed270bd"} Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.053061 4858 scope.go:117] "RemoveContainer" containerID="50c851b04e8ae8abd76ffb4703a74dc21080067cc77eb1f27019e4ad406afe54" Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.053334 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767476ffd5-svc5x" Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.054460 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/985b6324-1458-4ed6-9aaa-019ce90601f9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "985b6324-1458-4ed6-9aaa-019ce90601f9" (UID: "985b6324-1458-4ed6-9aaa-019ce90601f9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.069703 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr" Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.072131 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85bf9695fc-8jtxr" event={"ID":"298abc4c-aae8-461b-9742-7349f71de55f","Type":"ContainerDied","Data":"7a6fd0dd264e2052545e893d10c46df69d5607b4df18e5b5f6cd602ce41bbd59"} Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.124049 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/985b6324-1458-4ed6-9aaa-019ce90601f9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.205786 4858 scope.go:117] "RemoveContainer" containerID="47c286361edd244e1eeae0005a93630848c8349b8de28c7a2574ec97a47516ed" Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.206599 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85bf9695fc-8jtxr"] Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.232443 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85bf9695fc-8jtxr"] Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.258754 4858 scope.go:117] "RemoveContainer" containerID="d4ecf005a39e2c364014cc20abb222b9c14ceb5da59f50b17651cc2eda8c7e88" Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.309827 4858 scope.go:117] "RemoveContainer" containerID="b4bb3d71ae6b89bc15a14f30671ac4fad8dd1a28788578480ec9de91dfb3251e" Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.416159 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767476ffd5-svc5x"] Jan 27 20:27:41 crc kubenswrapper[4858]: I0127 20:27:41.424073 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-767476ffd5-svc5x"] Jan 27 20:27:41 crc kubenswrapper[4858]: E0127 20:27:41.491206 4858 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="13a7d533-55e2-4072-add8-4cd41613da8a" Jan 27 20:27:41 crc kubenswrapper[4858]: E0127 20:27:41.502093 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="b049e044-9171-4011-9c90-c334fa955321" Jan 27 20:27:42 crc kubenswrapper[4858]: I0127 20:27:42.084043 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="298abc4c-aae8-461b-9742-7349f71de55f" path="/var/lib/kubelet/pods/298abc4c-aae8-461b-9742-7349f71de55f/volumes" Jan 27 20:27:42 crc kubenswrapper[4858]: I0127 20:27:42.084964 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="985b6324-1458-4ed6-9aaa-019ce90601f9" path="/var/lib/kubelet/pods/985b6324-1458-4ed6-9aaa-019ce90601f9/volumes" Jan 27 20:27:42 crc kubenswrapper[4858]: I0127 20:27:42.086481 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b049e044-9171-4011-9c90-c334fa955321","Type":"ContainerStarted","Data":"71af3ec283408f363a02de7e917bc74e33c6da0009006f89457e53bcec87a783"} Jan 27 20:27:42 crc kubenswrapper[4858]: E0127 20:27:42.089234 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.46:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="b049e044-9171-4011-9c90-c334fa955321" Jan 27 20:27:42 crc kubenswrapper[4858]: I0127 20:27:42.089923 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"13a7d533-55e2-4072-add8-4cd41613da8a","Type":"ContainerStarted","Data":"34a37eecd15b6888683b7efeeb474bda5d60a7716cb487b4af5a8f99f9d66140"} Jan 27 20:27:42 crc kubenswrapper[4858]: E0127 20:27:42.092094 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.46:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="13a7d533-55e2-4072-add8-4cd41613da8a" Jan 27 20:27:42 crc kubenswrapper[4858]: I0127 20:27:42.092892 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wzfwm" event={"ID":"51e0f41c-e13e-41d1-bc48-71c6ef96994c","Type":"ContainerStarted","Data":"8ca494a94baa7b5f106f3de311b5d65eb52e1f926224895fe6d2a7113f6de550"} Jan 27 20:27:42 crc kubenswrapper[4858]: I0127 20:27:42.101496 4858 generic.go:334] "Generic (PLEG): container finished" podID="853aaea7-10f1-421e-9ed2-e022a635e124" containerID="6d68197f5a963d6686fbb3c1497eb83bfd87c14cf3618b26e05289f1ea07d94c" exitCode=0 Jan 27 20:27:42 crc kubenswrapper[4858]: I0127 20:27:42.101609 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" event={"ID":"853aaea7-10f1-421e-9ed2-e022a635e124","Type":"ContainerDied","Data":"6d68197f5a963d6686fbb3c1497eb83bfd87c14cf3618b26e05289f1ea07d94c"} Jan 27 20:27:42 crc kubenswrapper[4858]: I0127 20:27:42.101786 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" event={"ID":"853aaea7-10f1-421e-9ed2-e022a635e124","Type":"ContainerStarted","Data":"59f2ed29e8ce91b84bd04fade37b367c8a4276ea703ea689fc8f2e46e964a3a7"} Jan 27 20:27:42 crc kubenswrapper[4858]: I0127 20:27:42.106138 4858 generic.go:334] "Generic (PLEG): container finished" podID="6428c38a-0174-4a1d-b83a-afca9a457b08" containerID="e78c85a4de02a8c6ba4a8fe7344f10a97b7b8b0bf8563bfcc3ae7c30ff0bbd9b" exitCode=0 Jan 27 20:27:42 crc kubenswrapper[4858]: I0127 20:27:42.106828 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" event={"ID":"6428c38a-0174-4a1d-b83a-afca9a457b08","Type":"ContainerDied","Data":"e78c85a4de02a8c6ba4a8fe7344f10a97b7b8b0bf8563bfcc3ae7c30ff0bbd9b"} Jan 27 20:27:42 crc kubenswrapper[4858]: I0127 20:27:42.106859 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" event={"ID":"6428c38a-0174-4a1d-b83a-afca9a457b08","Type":"ContainerStarted","Data":"3d879a27d8a6768144d1dc49555b88f49e95ccd17d8edb36bf06e54c7228c71c"} Jan 27 20:27:42 crc kubenswrapper[4858]: I0127 20:27:42.107047 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:42 crc kubenswrapper[4858]: I0127 20:27:42.164399 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-wzfwm" podStartSLOduration=4.286305761 podStartE2EDuration="5.164378136s" podCreationTimestamp="2026-01-27 20:27:37 +0000 UTC" firstStartedPulling="2026-01-27 20:27:39.904715468 +0000 UTC m=+1204.612531174" lastFinishedPulling="2026-01-27 20:27:40.782787843 +0000 UTC m=+1205.490603549" observedRunningTime="2026-01-27 20:27:42.141436963 +0000 UTC m=+1206.849252689" watchObservedRunningTime="2026-01-27 20:27:42.164378136 +0000 UTC m=+1206.872193842" Jan 27 20:27:42 crc kubenswrapper[4858]: I0127 20:27:42.230917 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" podStartSLOduration=5.230835587 podStartE2EDuration="5.230835587s" podCreationTimestamp="2026-01-27 20:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:27:42.195532134 +0000 UTC m=+1206.903347860" watchObservedRunningTime="2026-01-27 20:27:42.230835587 +0000 UTC m=+1206.938651293" Jan 27 20:27:43 crc kubenswrapper[4858]: I0127 20:27:43.132677 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" event={"ID":"853aaea7-10f1-421e-9ed2-e022a635e124","Type":"ContainerStarted","Data":"988c4d19bb83dc260fca13d64845d5095c1246f000da21f2aabc212494907d6b"} Jan 27 20:27:43 crc kubenswrapper[4858]: E0127 20:27:43.135361 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.46:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="13a7d533-55e2-4072-add8-4cd41613da8a" Jan 27 20:27:43 crc kubenswrapper[4858]: E0127 20:27:43.135370 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.46:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-nb-0" 
podUID="b049e044-9171-4011-9c90-c334fa955321" Jan 27 20:27:43 crc kubenswrapper[4858]: I0127 20:27:43.153410 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" podStartSLOduration=6.153381298 podStartE2EDuration="6.153381298s" podCreationTimestamp="2026-01-27 20:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:27:43.149981774 +0000 UTC m=+1207.857797500" watchObservedRunningTime="2026-01-27 20:27:43.153381298 +0000 UTC m=+1207.861197004" Jan 27 20:27:43 crc kubenswrapper[4858]: I0127 20:27:43.360151 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 27 20:27:43 crc kubenswrapper[4858]: I0127 20:27:43.470632 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:45 crc kubenswrapper[4858]: I0127 20:27:45.154061 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vhbc7" event={"ID":"b28f0be1-aa4f-445d-95c3-1abd84b9c82a","Type":"ContainerStarted","Data":"4cc330dbd80107b621d9d8b1c89301c2e0624c5f954772454359a5fbb7a2121e"} Jan 27 20:27:46 crc kubenswrapper[4858]: I0127 20:27:46.174671 4858 generic.go:334] "Generic (PLEG): container finished" podID="b28f0be1-aa4f-445d-95c3-1abd84b9c82a" containerID="4cc330dbd80107b621d9d8b1c89301c2e0624c5f954772454359a5fbb7a2121e" exitCode=0 Jan 27 20:27:46 crc kubenswrapper[4858]: I0127 20:27:46.174735 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vhbc7" event={"ID":"b28f0be1-aa4f-445d-95c3-1abd84b9c82a","Type":"ContainerDied","Data":"4cc330dbd80107b621d9d8b1c89301c2e0624c5f954772454359a5fbb7a2121e"} Jan 27 20:27:47 crc kubenswrapper[4858]: I0127 20:27:47.911705 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:48 crc kubenswrapper[4858]: I0127 20:27:48.214239 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vhbc7" event={"ID":"b28f0be1-aa4f-445d-95c3-1abd84b9c82a","Type":"ContainerStarted","Data":"c85f07921a965e299110e96797eb16b49a4de72aa5807316aa6b05b2788b13d2"} Jan 27 20:27:48 crc kubenswrapper[4858]: I0127 20:27:48.225820 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4768f41e-8ff0-4cec-b741-75f8902eb0e8","Type":"ContainerStarted","Data":"8905bb0cea73ea7102aa2836d3b542c9c44b028b39f0d24a17dd6c8ebf53bb9e"} Jan 27 20:27:48 crc kubenswrapper[4858]: I0127 20:27:48.257081 4858 generic.go:334] "Generic (PLEG): container finished" podID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerID="b439e35aa3ebaa567247e0fa57cfdd25a6b0cef090835b9e2bb45d1e2b49fc66" exitCode=0 Jan 27 20:27:48 crc kubenswrapper[4858]: I0127 20:27:48.257146 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aa9e0f34-290c-4297-b65b-2046ea8bd21d","Type":"ContainerDied","Data":"b439e35aa3ebaa567247e0fa57cfdd25a6b0cef090835b9e2bb45d1e2b49fc66"} Jan 27 20:27:48 crc kubenswrapper[4858]: I0127 20:27:48.283983 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2","Type":"ContainerStarted","Data":"a7f9036c77e96dfe20e59ced87acad06df172066fcc8ff6ae5ba1b818cc4ed32"} Jan 27 20:27:48 crc kubenswrapper[4858]: I0127 
20:27:48.472456 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:27:48 crc kubenswrapper[4858]: I0127 20:27:48.566495 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b8bd89969-bzsqx"] Jan 27 20:27:48 crc kubenswrapper[4858]: I0127 20:27:48.566816 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" podUID="6428c38a-0174-4a1d-b83a-afca9a457b08" containerName="dnsmasq-dns" containerID="cri-o://3d879a27d8a6768144d1dc49555b88f49e95ccd17d8edb36bf06e54c7228c71c" gracePeriod=10 Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.306195 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vhbc7" event={"ID":"b28f0be1-aa4f-445d-95c3-1abd84b9c82a","Type":"ContainerStarted","Data":"7a00bc1b883683742296e535bcb4b85a5aa22c098933e18648ba2a2e2fa10e12"} Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.306516 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.313482 4858 generic.go:334] "Generic (PLEG): container finished" podID="6428c38a-0174-4a1d-b83a-afca9a457b08" containerID="3d879a27d8a6768144d1dc49555b88f49e95ccd17d8edb36bf06e54c7228c71c" exitCode=0 Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.313539 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" event={"ID":"6428c38a-0174-4a1d-b83a-afca9a457b08","Type":"ContainerDied","Data":"3d879a27d8a6768144d1dc49555b88f49e95ccd17d8edb36bf06e54c7228c71c"} Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.313657 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" event={"ID":"6428c38a-0174-4a1d-b83a-afca9a457b08","Type":"ContainerDied","Data":"2223ff1a04ea52a8d17f1129fb40e3f88c8727819f0f4faa565e65b7d80bff47"} Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.313675 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2223ff1a04ea52a8d17f1129fb40e3f88c8727819f0f4faa565e65b7d80bff47" Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.334288 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-vhbc7" podStartSLOduration=12.883123342 podStartE2EDuration="36.334264106s" podCreationTimestamp="2026-01-27 20:27:13 +0000 UTC" firstStartedPulling="2026-01-27 20:27:20.674527256 +0000 UTC m=+1185.382342962" lastFinishedPulling="2026-01-27 20:27:44.12566802 +0000 UTC m=+1208.833483726" observedRunningTime="2026-01-27 20:27:49.333238227 +0000 UTC m=+1214.041053953" watchObservedRunningTime="2026-01-27 20:27:49.334264106 +0000 UTC m=+1214.042079812" Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.388892 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.552277 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-dns-svc\") pod \"6428c38a-0174-4a1d-b83a-afca9a457b08\" (UID: \"6428c38a-0174-4a1d-b83a-afca9a457b08\") " Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.552786 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzh4b\" (UniqueName: \"kubernetes.io/projected/6428c38a-0174-4a1d-b83a-afca9a457b08-kube-api-access-mzh4b\") pod \"6428c38a-0174-4a1d-b83a-afca9a457b08\" (UID: \"6428c38a-0174-4a1d-b83a-afca9a457b08\") " Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.552876 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-ovsdbserver-nb\") pod \"6428c38a-0174-4a1d-b83a-afca9a457b08\" (UID: \"6428c38a-0174-4a1d-b83a-afca9a457b08\") " Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.552912 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-config\") pod \"6428c38a-0174-4a1d-b83a-afca9a457b08\" (UID: \"6428c38a-0174-4a1d-b83a-afca9a457b08\") " Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.603720 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6428c38a-0174-4a1d-b83a-afca9a457b08-kube-api-access-mzh4b" (OuterVolumeSpecName: "kube-api-access-mzh4b") pod "6428c38a-0174-4a1d-b83a-afca9a457b08" (UID: "6428c38a-0174-4a1d-b83a-afca9a457b08"). InnerVolumeSpecName "kube-api-access-mzh4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.638570 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-config" (OuterVolumeSpecName: "config") pod "6428c38a-0174-4a1d-b83a-afca9a457b08" (UID: "6428c38a-0174-4a1d-b83a-afca9a457b08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.653260 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6428c38a-0174-4a1d-b83a-afca9a457b08" (UID: "6428c38a-0174-4a1d-b83a-afca9a457b08"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.655154 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.655176 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.655188 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzh4b\" (UniqueName: \"kubernetes.io/projected/6428c38a-0174-4a1d-b83a-afca9a457b08-kube-api-access-mzh4b\") on node \"crc\" DevicePath \"\"" Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.659170 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6428c38a-0174-4a1d-b83a-afca9a457b08" (UID: "6428c38a-0174-4a1d-b83a-afca9a457b08"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:27:49 crc kubenswrapper[4858]: I0127 20:27:49.757461 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6428c38a-0174-4a1d-b83a-afca9a457b08-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.227619 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566979bdbf-kg5cb"] Jan 27 20:27:50 crc kubenswrapper[4858]: E0127 20:27:50.228056 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985b6324-1458-4ed6-9aaa-019ce90601f9" containerName="dnsmasq-dns" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.232258 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="985b6324-1458-4ed6-9aaa-019ce90601f9" containerName="dnsmasq-dns" Jan 27 20:27:50 crc kubenswrapper[4858]: E0127 20:27:50.232363 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="298abc4c-aae8-461b-9742-7349f71de55f" containerName="dnsmasq-dns" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.232377 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="298abc4c-aae8-461b-9742-7349f71de55f" containerName="dnsmasq-dns" Jan 27 20:27:50 crc kubenswrapper[4858]: E0127 20:27:50.232445 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="298abc4c-aae8-461b-9742-7349f71de55f" containerName="init" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.232455 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="298abc4c-aae8-461b-9742-7349f71de55f" containerName="init" Jan 27 20:27:50 crc kubenswrapper[4858]: E0127 20:27:50.232473 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985b6324-1458-4ed6-9aaa-019ce90601f9" containerName="init" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.232482 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="985b6324-1458-4ed6-9aaa-019ce90601f9" containerName="init" Jan 27 20:27:50 crc kubenswrapper[4858]: E0127 20:27:50.232525 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6428c38a-0174-4a1d-b83a-afca9a457b08" containerName="init" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.232535 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6428c38a-0174-4a1d-b83a-afca9a457b08" containerName="init" Jan 27 20:27:50 crc kubenswrapper[4858]: E0127 20:27:50.232570 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6428c38a-0174-4a1d-b83a-afca9a457b08" containerName="dnsmasq-dns" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.232583 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6428c38a-0174-4a1d-b83a-afca9a457b08" containerName="dnsmasq-dns" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.232980 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="985b6324-1458-4ed6-9aaa-019ce90601f9" containerName="dnsmasq-dns" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.233000 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="298abc4c-aae8-461b-9742-7349f71de55f" containerName="dnsmasq-dns" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.233015 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6428c38a-0174-4a1d-b83a-afca9a457b08" containerName="dnsmasq-dns" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.236874 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.241707 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566979bdbf-kg5cb"] Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.338593 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f7f223cd-763c-408e-a3cf-067af57416af","Type":"ContainerStarted","Data":"c14df395bc87bb073e216790ac1de5bb9363c7f4cfcd612b98a7823758bdd431"} Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.346921 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"6c539609-6c9e-46bc-a0d7-6a629e83ce17","Type":"ContainerStarted","Data":"9159361669f202d325e9ed3fa878b087c229e8652e66a74175e9b39b3e2e5fb6"} Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.347473 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b8bd89969-bzsqx" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.347636 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.369483 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-config\") pod \"dnsmasq-dns-566979bdbf-kg5cb\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") " pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.369624 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-ovsdbserver-nb\") pod \"dnsmasq-dns-566979bdbf-kg5cb\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") " pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.369676 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-dns-svc\") pod \"dnsmasq-dns-566979bdbf-kg5cb\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") " pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.369699 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf6j5\" (UniqueName: \"kubernetes.io/projected/a60150fc-ab94-47ae-b1e8-247c240f0995-kube-api-access-xf6j5\") pod \"dnsmasq-dns-566979bdbf-kg5cb\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") " pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.369756 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-ovsdbserver-sb\") pod \"dnsmasq-dns-566979bdbf-kg5cb\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") " pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.394340 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b8bd89969-bzsqx"] Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.422896 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b8bd89969-bzsqx"] Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.474645 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-ovsdbserver-sb\") pod \"dnsmasq-dns-566979bdbf-kg5cb\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") " pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.474778 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-config\") pod \"dnsmasq-dns-566979bdbf-kg5cb\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") " pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.474974 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-ovsdbserver-nb\") pod \"dnsmasq-dns-566979bdbf-kg5cb\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") " pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.475005 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-dns-svc\") pod \"dnsmasq-dns-566979bdbf-kg5cb\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") " pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.475060 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf6j5\" (UniqueName: \"kubernetes.io/projected/a60150fc-ab94-47ae-b1e8-247c240f0995-kube-api-access-xf6j5\") pod \"dnsmasq-dns-566979bdbf-kg5cb\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") " pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.478243 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-config\") pod \"dnsmasq-dns-566979bdbf-kg5cb\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") " pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.478436 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-ovsdbserver-nb\") pod \"dnsmasq-dns-566979bdbf-kg5cb\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") " pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.479244 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-ovsdbserver-sb\") pod \"dnsmasq-dns-566979bdbf-kg5cb\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") " pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.479391 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-dns-svc\") pod \"dnsmasq-dns-566979bdbf-kg5cb\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") " pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.500914 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf6j5\" (UniqueName: \"kubernetes.io/projected/a60150fc-ab94-47ae-b1e8-247c240f0995-kube-api-access-xf6j5\") pod \"dnsmasq-dns-566979bdbf-kg5cb\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") " pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:50 crc kubenswrapper[4858]: I0127 20:27:50.564177 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.148621 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566979bdbf-kg5cb"] Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.358342 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" event={"ID":"a60150fc-ab94-47ae-b1e8-247c240f0995","Type":"ContainerStarted","Data":"771f3b1264cbb5c3416a3d86ea6ab851a1b0901185e5c9ac6c31ddf115f25836"} Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.417157 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.424289 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.437370 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.437827 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-kwqdn" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.438034 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.438158 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.447131 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.513269 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177247c1-763d-4d0c-81ba-f538937f0008-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.513397 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/177247c1-763d-4d0c-81ba-f538937f0008-cache\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.513446 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/177247c1-763d-4d0c-81ba-f538937f0008-lock\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.513486 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.513521 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:51 crc 
kubenswrapper[4858]: I0127 20:27:51.513572 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcprd\" (UniqueName: \"kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-kube-api-access-jcprd\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.618448 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177247c1-763d-4d0c-81ba-f538937f0008-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.618739 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/177247c1-763d-4d0c-81ba-f538937f0008-cache\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.618777 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/177247c1-763d-4d0c-81ba-f538937f0008-lock\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.618853 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.618909 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.619393 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/177247c1-763d-4d0c-81ba-f538937f0008-cache\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.619428 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/177247c1-763d-4d0c-81ba-f538937f0008-lock\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.619624 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: E0127 20:27:51.619763 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 20:27:51 crc kubenswrapper[4858]: E0127 20:27:51.619785 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" 
not found Jan 27 20:27:51 crc kubenswrapper[4858]: E0127 20:27:51.619879 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift podName:177247c1-763d-4d0c-81ba-f538937f0008 nodeName:}" failed. No retries permitted until 2026-01-27 20:27:52.119849849 +0000 UTC m=+1216.827665555 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift") pod "swift-storage-0" (UID: "177247c1-763d-4d0c-81ba-f538937f0008") : configmap "swift-ring-files" not found Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.619965 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcprd\" (UniqueName: \"kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-kube-api-access-jcprd\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.626430 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/177247c1-763d-4d0c-81ba-f538937f0008-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.641479 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcprd\" (UniqueName: \"kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-kube-api-access-jcprd\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.664819 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.977615 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-msjpr"] Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.979605 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.995763 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-msjpr"] Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.996237 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.996503 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 20:27:51 crc kubenswrapper[4858]: I0127 20:27:51.996692 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.040469 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-swiftconf\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.040605 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-ring-data-devices\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.040675 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvgcc\" (UniqueName: \"kubernetes.io/projected/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-kube-api-access-bvgcc\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.040711 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-scripts\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.040730 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-etc-swift\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.040788 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-dispersionconf\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.040826 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-combined-ca-bundle\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 
20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.083117 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6428c38a-0174-4a1d-b83a-afca9a457b08" path="/var/lib/kubelet/pods/6428c38a-0174-4a1d-b83a-afca9a457b08/volumes" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.143093 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-dispersionconf\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.143168 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-combined-ca-bundle\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.143240 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-swiftconf\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.143298 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-ring-data-devices\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.143320 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvgcc\" (UniqueName: \"kubernetes.io/projected/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-kube-api-access-bvgcc\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.143354 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-scripts\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.143374 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-etc-swift\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.143414 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.145812 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-etc-swift\") pod \"swift-ring-rebalance-msjpr\" 
(UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.146964 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-scripts\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.150161 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-dispersionconf\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: E0127 20:27:52.150640 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 20:27:52 crc kubenswrapper[4858]: E0127 20:27:52.150691 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 20:27:52 crc kubenswrapper[4858]: E0127 20:27:52.150763 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift podName:177247c1-763d-4d0c-81ba-f538937f0008 nodeName:}" failed. No retries permitted until 2026-01-27 20:27:53.150738882 +0000 UTC m=+1217.858554758 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift") pod "swift-storage-0" (UID: "177247c1-763d-4d0c-81ba-f538937f0008") : configmap "swift-ring-files" not found Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.151619 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-combined-ca-bundle\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.151779 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-ring-data-devices\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.152108 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-swiftconf\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.171987 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvgcc\" (UniqueName: \"kubernetes.io/projected/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-kube-api-access-bvgcc\") pod \"swift-ring-rebalance-msjpr\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.338195 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.373628 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jc5cc" event={"ID":"d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa","Type":"ContainerStarted","Data":"4229fffb8963376c7eddb5ba58a5e805ddf04212aa5266124b950f2d3760077a"} Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.373857 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-jc5cc" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.412351 4858 generic.go:334] "Generic (PLEG): container finished" podID="a60150fc-ab94-47ae-b1e8-247c240f0995" containerID="0340505a9d27a8f05c02d5caa376e25c316e5cf186aaac16586595b145468313" exitCode=0 Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.413034 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" event={"ID":"a60150fc-ab94-47ae-b1e8-247c240f0995","Type":"ContainerDied","Data":"0340505a9d27a8f05c02d5caa376e25c316e5cf186aaac16586595b145468313"} Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.461473 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-jc5cc" podStartSLOduration=8.758456455 podStartE2EDuration="39.461416576s" podCreationTimestamp="2026-01-27 20:27:13 +0000 UTC" firstStartedPulling="2026-01-27 20:27:20.517899819 +0000 UTC m=+1185.225715525" lastFinishedPulling="2026-01-27 20:27:51.22085995 +0000 UTC m=+1215.928675646" observedRunningTime="2026-01-27 20:27:52.420901409 +0000 UTC m=+1217.128717125" watchObservedRunningTime="2026-01-27 20:27:52.461416576 +0000 UTC m=+1217.169232282" Jan 27 20:27:52 crc kubenswrapper[4858]: I0127 20:27:52.915898 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-msjpr"] Jan 27 20:27:52 crc kubenswrapper[4858]: W0127 20:27:52.929696 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode95660bd_4df7_4b1f_8dd1_8183870d0c8e.slice/crio-91d80ee3d760f8dc25e3dc34e62c77d40f46d23c9cd147834141ff510fbe86cf WatchSource:0}: Error finding container 91d80ee3d760f8dc25e3dc34e62c77d40f46d23c9cd147834141ff510fbe86cf: Status 404 returned error can't find the container with id 91d80ee3d760f8dc25e3dc34e62c77d40f46d23c9cd147834141ff510fbe86cf Jan 27 20:27:53 crc kubenswrapper[4858]: I0127 20:27:53.174194 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:53 crc kubenswrapper[4858]: E0127 20:27:53.174419 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 20:27:53 crc kubenswrapper[4858]: E0127 20:27:53.174438 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 20:27:53 crc kubenswrapper[4858]: E0127 20:27:53.174500 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift podName:177247c1-763d-4d0c-81ba-f538937f0008 nodeName:}" failed. No retries permitted until 2026-01-27 20:27:55.174479372 +0000 UTC m=+1219.882295088 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift") pod "swift-storage-0" (UID: "177247c1-763d-4d0c-81ba-f538937f0008") : configmap "swift-ring-files" not found Jan 27 20:27:53 crc kubenswrapper[4858]: I0127 20:27:53.425193 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-msjpr" event={"ID":"e95660bd-4df7-4b1f-8dd1-8183870d0c8e","Type":"ContainerStarted","Data":"91d80ee3d760f8dc25e3dc34e62c77d40f46d23c9cd147834141ff510fbe86cf"} Jan 27 20:27:53 crc kubenswrapper[4858]: I0127 20:27:53.435614 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d2c5e060-865d-405e-937d-1450a1928f49","Type":"ContainerStarted","Data":"1b57e8c0c17b466d9f4a11b955fbd66dfcf24478faf694d2530e4729033668a1"} Jan 27 20:27:53 crc kubenswrapper[4858]: I0127 20:27:53.436498 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 20:27:53 crc kubenswrapper[4858]: I0127 20:27:53.442886 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" event={"ID":"a60150fc-ab94-47ae-b1e8-247c240f0995","Type":"ContainerStarted","Data":"141cfd907442ce40bae896f1310659ad45760b2c589d9a925d18147b8d550ee4"} Jan 27 20:27:53 crc kubenswrapper[4858]: I0127 20:27:53.443407 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:27:53 crc kubenswrapper[4858]: I0127 20:27:53.471536 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.418988747 podStartE2EDuration="44.471510629s" podCreationTimestamp="2026-01-27 20:27:09 +0000 UTC" firstStartedPulling="2026-01-27 20:27:20.495351457 +0000 UTC m=+1185.203167163" lastFinishedPulling="2026-01-27 20:27:52.547873339 +0000 UTC m=+1217.255689045" observedRunningTime="2026-01-27 20:27:53.455800716 +0000 UTC m=+1218.163616432" watchObservedRunningTime="2026-01-27 20:27:53.471510629 +0000 UTC m=+1218.179326355" Jan 27 20:27:53 crc kubenswrapper[4858]: I0127 20:27:53.486457 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" podStartSLOduration=3.486434761 podStartE2EDuration="3.486434761s" podCreationTimestamp="2026-01-27 20:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:27:53.479441148 +0000 UTC m=+1218.187256864" watchObservedRunningTime="2026-01-27 20:27:53.486434761 +0000 UTC m=+1218.194250467" Jan 27 20:27:54 crc kubenswrapper[4858]: I0127 20:27:54.457295 4858 generic.go:334] "Generic (PLEG): container finished" podID="4768f41e-8ff0-4cec-b741-75f8902eb0e8" containerID="8905bb0cea73ea7102aa2836d3b542c9c44b028b39f0d24a17dd6c8ebf53bb9e" exitCode=0 Jan 27 20:27:54 crc kubenswrapper[4858]: I0127 20:27:54.457701 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4768f41e-8ff0-4cec-b741-75f8902eb0e8","Type":"ContainerDied","Data":"8905bb0cea73ea7102aa2836d3b542c9c44b028b39f0d24a17dd6c8ebf53bb9e"} Jan 27 20:27:55 crc kubenswrapper[4858]: I0127 20:27:55.233289 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift\") pod \"swift-storage-0\" 
(UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:55 crc kubenswrapper[4858]: E0127 20:27:55.233536 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 20:27:55 crc kubenswrapper[4858]: E0127 20:27:55.233595 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 20:27:55 crc kubenswrapper[4858]: E0127 20:27:55.233676 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift podName:177247c1-763d-4d0c-81ba-f538937f0008 nodeName:}" failed. No retries permitted until 2026-01-27 20:27:59.233652873 +0000 UTC m=+1223.941468579 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift") pod "swift-storage-0" (UID: "177247c1-763d-4d0c-81ba-f538937f0008") : configmap "swift-ring-files" not found Jan 27 20:27:55 crc kubenswrapper[4858]: I0127 20:27:55.469060 4858 generic.go:334] "Generic (PLEG): container finished" podID="f7f223cd-763c-408e-a3cf-067af57416af" containerID="c14df395bc87bb073e216790ac1de5bb9363c7f4cfcd612b98a7823758bdd431" exitCode=0 Jan 27 20:27:55 crc kubenswrapper[4858]: I0127 20:27:55.469113 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f7f223cd-763c-408e-a3cf-067af57416af","Type":"ContainerDied","Data":"c14df395bc87bb073e216790ac1de5bb9363c7f4cfcd612b98a7823758bdd431"} Jan 27 20:27:59 crc kubenswrapper[4858]: I0127 20:27:59.235967 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:27:59 crc kubenswrapper[4858]: E0127 20:27:59.236198 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 20:27:59 crc kubenswrapper[4858]: E0127 20:27:59.236888 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 20:27:59 crc kubenswrapper[4858]: E0127 20:27:59.236996 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift podName:177247c1-763d-4d0c-81ba-f538937f0008 nodeName:}" failed. No retries permitted until 2026-01-27 20:28:07.236955644 +0000 UTC m=+1231.944771370 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift") pod "swift-storage-0" (UID: "177247c1-763d-4d0c-81ba-f538937f0008") : configmap "swift-ring-files" not found Jan 27 20:28:00 crc kubenswrapper[4858]: I0127 20:28:00.131019 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 20:28:00 crc kubenswrapper[4858]: I0127 20:28:00.565731 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" Jan 27 20:28:00 crc kubenswrapper[4858]: I0127 20:28:00.630582 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69ccbdd8fc-rk8bc"] Jan 27 20:28:00 crc kubenswrapper[4858]: I0127 20:28:00.630903 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" podUID="853aaea7-10f1-421e-9ed2-e022a635e124" containerName="dnsmasq-dns" containerID="cri-o://988c4d19bb83dc260fca13d64845d5095c1246f000da21f2aabc212494907d6b" gracePeriod=10 Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.165522 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.286296 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx5pj\" (UniqueName: \"kubernetes.io/projected/853aaea7-10f1-421e-9ed2-e022a635e124-kube-api-access-gx5pj\") pod \"853aaea7-10f1-421e-9ed2-e022a635e124\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.286375 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-ovsdbserver-nb\") pod \"853aaea7-10f1-421e-9ed2-e022a635e124\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.286460 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-dns-svc\") pod \"853aaea7-10f1-421e-9ed2-e022a635e124\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.286512 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-ovsdbserver-sb\") pod \"853aaea7-10f1-421e-9ed2-e022a635e124\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.286665 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-config\") pod \"853aaea7-10f1-421e-9ed2-e022a635e124\" (UID: \"853aaea7-10f1-421e-9ed2-e022a635e124\") " Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.294892 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853aaea7-10f1-421e-9ed2-e022a635e124-kube-api-access-gx5pj" (OuterVolumeSpecName: "kube-api-access-gx5pj") pod "853aaea7-10f1-421e-9ed2-e022a635e124" (UID: "853aaea7-10f1-421e-9ed2-e022a635e124"). InnerVolumeSpecName "kube-api-access-gx5pj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.373561 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "853aaea7-10f1-421e-9ed2-e022a635e124" (UID: "853aaea7-10f1-421e-9ed2-e022a635e124"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.388927 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx5pj\" (UniqueName: \"kubernetes.io/projected/853aaea7-10f1-421e-9ed2-e022a635e124-kube-api-access-gx5pj\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.388993 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.389976 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "853aaea7-10f1-421e-9ed2-e022a635e124" (UID: "853aaea7-10f1-421e-9ed2-e022a635e124"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.401930 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-config" (OuterVolumeSpecName: "config") pod "853aaea7-10f1-421e-9ed2-e022a635e124" (UID: "853aaea7-10f1-421e-9ed2-e022a635e124"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.403232 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "853aaea7-10f1-421e-9ed2-e022a635e124" (UID: "853aaea7-10f1-421e-9ed2-e022a635e124"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.490712 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.490755 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.490765 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/853aaea7-10f1-421e-9ed2-e022a635e124-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.526588 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f7f223cd-763c-408e-a3cf-067af57416af","Type":"ContainerStarted","Data":"9d676c2ac59be4bf198d95b5f090a1912bef8459d69a47236569069e028970bd"} Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.529263 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-msjpr" event={"ID":"e95660bd-4df7-4b1f-8dd1-8183870d0c8e","Type":"ContainerStarted","Data":"54937daf06ddb0abe003b21866c6abeee8577431e7a61bf472052bd050626e7e"} Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.533678 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b049e044-9171-4011-9c90-c334fa955321","Type":"ContainerStarted","Data":"14a7e1b4180c227828e69d76b23c2bc63c8f42eb921e6b0ce3008db70d63ed79"} Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.536066 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"13a7d533-55e2-4072-add8-4cd41613da8a","Type":"ContainerStarted","Data":"f61bb75595abc2c43e3d992d595f3afad011d367e9ebb18ee21a0ac471308953"} Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.537865 4858 generic.go:334] "Generic (PLEG): container finished" podID="853aaea7-10f1-421e-9ed2-e022a635e124" containerID="988c4d19bb83dc260fca13d64845d5095c1246f000da21f2aabc212494907d6b" exitCode=0 Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.537912 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" event={"ID":"853aaea7-10f1-421e-9ed2-e022a635e124","Type":"ContainerDied","Data":"988c4d19bb83dc260fca13d64845d5095c1246f000da21f2aabc212494907d6b"} Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.537932 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" event={"ID":"853aaea7-10f1-421e-9ed2-e022a635e124","Type":"ContainerDied","Data":"59f2ed29e8ce91b84bd04fade37b367c8a4276ea703ea689fc8f2e46e964a3a7"} Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.537952 4858 scope.go:117] "RemoveContainer" containerID="988c4d19bb83dc260fca13d64845d5095c1246f000da21f2aabc212494907d6b" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.538058 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69ccbdd8fc-rk8bc" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.545142 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4768f41e-8ff0-4cec-b741-75f8902eb0e8","Type":"ContainerStarted","Data":"21e9928992d17e14caa5ca1cff33a5677d919670677b3e8b84157d686e5c6b61"} Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.547262 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aa9e0f34-290c-4297-b65b-2046ea8bd21d","Type":"ContainerStarted","Data":"7a46157259e5e9d82e9db91fb5da218a22d65c3c0c118df0058647a983c7151c"} Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.554313 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371981.300491 podStartE2EDuration="55.554284482s" podCreationTimestamp="2026-01-27 20:27:06 +0000 UTC" firstStartedPulling="2026-01-27 20:27:20.415089365 +0000 UTC m=+1185.122905071" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:28:01.550669932 +0000 UTC m=+1226.258485648" watchObservedRunningTime="2026-01-27 20:28:01.554284482 +0000 UTC m=+1226.262100188" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.576600 4858 scope.go:117] "RemoveContainer" containerID="6d68197f5a963d6686fbb3c1497eb83bfd87c14cf3618b26e05289f1ea07d94c" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.591481 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-msjpr" podStartSLOduration=2.841966331 podStartE2EDuration="10.591460167s" podCreationTimestamp="2026-01-27 20:27:51 +0000 UTC" firstStartedPulling="2026-01-27 20:27:52.93608445 +0000 UTC m=+1217.643900156" lastFinishedPulling="2026-01-27 20:28:00.685578286 +0000 UTC m=+1225.393393992" observedRunningTime="2026-01-27 20:28:01.5825223 +0000 UTC m=+1226.290338006" watchObservedRunningTime="2026-01-27 20:28:01.591460167 +0000 UTC m=+1226.299275873" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.609035 4858 scope.go:117] "RemoveContainer" containerID="988c4d19bb83dc260fca13d64845d5095c1246f000da21f2aabc212494907d6b" Jan 27 20:28:01 crc kubenswrapper[4858]: E0127 20:28:01.609684 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"988c4d19bb83dc260fca13d64845d5095c1246f000da21f2aabc212494907d6b\": container with ID starting with 988c4d19bb83dc260fca13d64845d5095c1246f000da21f2aabc212494907d6b not found: ID does not exist" containerID="988c4d19bb83dc260fca13d64845d5095c1246f000da21f2aabc212494907d6b" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.609716 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988c4d19bb83dc260fca13d64845d5095c1246f000da21f2aabc212494907d6b"} err="failed to get container status \"988c4d19bb83dc260fca13d64845d5095c1246f000da21f2aabc212494907d6b\": rpc error: code = NotFound desc = could not find container \"988c4d19bb83dc260fca13d64845d5095c1246f000da21f2aabc212494907d6b\": container with ID starting with 988c4d19bb83dc260fca13d64845d5095c1246f000da21f2aabc212494907d6b not found: ID does not exist" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.609739 4858 scope.go:117] "RemoveContainer" containerID="6d68197f5a963d6686fbb3c1497eb83bfd87c14cf3618b26e05289f1ea07d94c" Jan 27 20:28:01 crc kubenswrapper[4858]: E0127 
20:28:01.610188 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d68197f5a963d6686fbb3c1497eb83bfd87c14cf3618b26e05289f1ea07d94c\": container with ID starting with 6d68197f5a963d6686fbb3c1497eb83bfd87c14cf3618b26e05289f1ea07d94c not found: ID does not exist" containerID="6d68197f5a963d6686fbb3c1497eb83bfd87c14cf3618b26e05289f1ea07d94c" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.610442 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d68197f5a963d6686fbb3c1497eb83bfd87c14cf3618b26e05289f1ea07d94c"} err="failed to get container status \"6d68197f5a963d6686fbb3c1497eb83bfd87c14cf3618b26e05289f1ea07d94c\": rpc error: code = NotFound desc = could not find container \"6d68197f5a963d6686fbb3c1497eb83bfd87c14cf3618b26e05289f1ea07d94c\": container with ID starting with 6d68197f5a963d6686fbb3c1497eb83bfd87c14cf3618b26e05289f1ea07d94c not found: ID does not exist" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.619208 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=29.23891892 podStartE2EDuration="56.619176321s" podCreationTimestamp="2026-01-27 20:27:05 +0000 UTC" firstStartedPulling="2026-01-27 20:27:20.496261472 +0000 UTC m=+1185.204077178" lastFinishedPulling="2026-01-27 20:27:47.876518873 +0000 UTC m=+1212.584334579" observedRunningTime="2026-01-27 20:28:01.616801355 +0000 UTC m=+1226.324617061" watchObservedRunningTime="2026-01-27 20:28:01.619176321 +0000 UTC m=+1226.326992027" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.648896 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.95452496 podStartE2EDuration="48.64887519s" podCreationTimestamp="2026-01-27 20:27:13 +0000 UTC" firstStartedPulling="2026-01-27 20:27:20.994493956 +0000 UTC m=+1185.702309672" lastFinishedPulling="2026-01-27 20:28:00.688844206 +0000 UTC m=+1225.396659902" observedRunningTime="2026-01-27 20:28:01.643381948 +0000 UTC m=+1226.351197664" watchObservedRunningTime="2026-01-27 20:28:01.64887519 +0000 UTC m=+1226.356690896" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.666667 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=7.169772188 podStartE2EDuration="45.666646099s" podCreationTimestamp="2026-01-27 20:27:16 +0000 UTC" firstStartedPulling="2026-01-27 20:27:22.19287648 +0000 UTC m=+1186.900692186" lastFinishedPulling="2026-01-27 20:28:00.689750391 +0000 UTC m=+1225.397566097" observedRunningTime="2026-01-27 20:28:01.661756175 +0000 UTC m=+1226.369571881" watchObservedRunningTime="2026-01-27 20:28:01.666646099 +0000 UTC m=+1226.374461805" Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.685895 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69ccbdd8fc-rk8bc"] Jan 27 20:28:01 crc kubenswrapper[4858]: I0127 20:28:01.695879 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69ccbdd8fc-rk8bc"] Jan 27 20:28:02 crc kubenswrapper[4858]: I0127 20:28:02.082795 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="853aaea7-10f1-421e-9ed2-e022a635e124" path="/var/lib/kubelet/pods/853aaea7-10f1-421e-9ed2-e022a635e124/volumes" Jan 27 20:28:02 crc kubenswrapper[4858]: I0127 20:28:02.566797 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 20:28:02 crc kubenswrapper[4858]: I0127 20:28:02.566899 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 20:28:02 crc kubenswrapper[4858]: I0127 20:28:02.923081 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 20:28:04 crc kubenswrapper[4858]: I0127 20:28:04.584758 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aa9e0f34-290c-4297-b65b-2046ea8bd21d","Type":"ContainerStarted","Data":"3b3e39cc70a770b37ee336c30cd07551febee27fdb0ad58a94a144a8a68a0f1a"} Jan 27 20:28:04 crc kubenswrapper[4858]: I0127 20:28:04.923358 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 20:28:05 crc kubenswrapper[4858]: I0127 20:28:05.610598 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 20:28:05 crc kubenswrapper[4858]: I0127 20:28:05.977462 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 20:28:06 crc kubenswrapper[4858]: I0127 20:28:06.658689 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 20:28:06 crc kubenswrapper[4858]: I0127 20:28:06.659065 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 20:28:06 crc kubenswrapper[4858]: I0127 20:28:06.789743 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 27 20:28:07 crc kubenswrapper[4858]: I0127 20:28:07.313888 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:28:07 crc kubenswrapper[4858]: E0127 20:28:07.314175 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 20:28:07 crc kubenswrapper[4858]: E0127 20:28:07.314462 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 20:28:07 crc kubenswrapper[4858]: E0127 20:28:07.314626 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift podName:177247c1-763d-4d0c-81ba-f538937f0008 nodeName:}" failed. No retries permitted until 2026-01-27 20:28:23.314581239 +0000 UTC m=+1248.022396985 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift") pod "swift-storage-0" (UID: "177247c1-763d-4d0c-81ba-f538937f0008") : configmap "swift-ring-files" not found Jan 27 20:28:07 crc kubenswrapper[4858]: I0127 20:28:07.616605 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 20:28:07 crc kubenswrapper[4858]: E0127 20:28:07.827122 4858 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.56:36348->38.129.56.56:42269: write tcp 38.129.56.56:36348->38.129.56.56:42269: write: broken pipe Jan 27 20:28:07 crc kubenswrapper[4858]: I0127 20:28:07.827518 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.116162 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.116774 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.194464 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-8625f"] Jan 27 20:28:08 crc kubenswrapper[4858]: E0127 20:28:08.195389 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853aaea7-10f1-421e-9ed2-e022a635e124" containerName="dnsmasq-dns" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.195483 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="853aaea7-10f1-421e-9ed2-e022a635e124" containerName="dnsmasq-dns" Jan 27 20:28:08 crc kubenswrapper[4858]: E0127 20:28:08.195612 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853aaea7-10f1-421e-9ed2-e022a635e124" containerName="init" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.195693 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="853aaea7-10f1-421e-9ed2-e022a635e124" containerName="init" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.195986 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="853aaea7-10f1-421e-9ed2-e022a635e124" containerName="dnsmasq-dns" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.196838 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8625f" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.208979 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3fe9-account-create-update-mvfhw"] Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.210390 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3fe9-account-create-update-mvfhw" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.212435 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.223959 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8625f"] Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.227278 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3fe9-account-create-update-mvfhw"] Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.300432 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-58c8-account-create-update-f2bwj"] Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.301925 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-58c8-account-create-update-f2bwj" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.306803 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.324264 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-58c8-account-create-update-f2bwj"] Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.350627 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dbaa1db-74f0-45f8-9f44-e27ebff3e89c-operator-scripts\") pod \"keystone-3fe9-account-create-update-mvfhw\" (UID: \"9dbaa1db-74f0-45f8-9f44-e27ebff3e89c\") " pod="openstack/keystone-3fe9-account-create-update-mvfhw" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.351021 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsqc4\" (UniqueName: \"kubernetes.io/projected/9dbaa1db-74f0-45f8-9f44-e27ebff3e89c-kube-api-access-bsqc4\") pod \"keystone-3fe9-account-create-update-mvfhw\" (UID: \"9dbaa1db-74f0-45f8-9f44-e27ebff3e89c\") " pod="openstack/keystone-3fe9-account-create-update-mvfhw" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.351158 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed125e14-c1dd-4edc-bf84-2e95b94afc30-operator-scripts\") pod \"placement-db-create-8625f\" (UID: \"ed125e14-c1dd-4edc-bf84-2e95b94afc30\") " pod="openstack/placement-db-create-8625f" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.351273 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq4vk\" (UniqueName: \"kubernetes.io/projected/ed125e14-c1dd-4edc-bf84-2e95b94afc30-kube-api-access-gq4vk\") pod \"placement-db-create-8625f\" (UID: \"ed125e14-c1dd-4edc-bf84-2e95b94afc30\") " pod="openstack/placement-db-create-8625f" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.453392 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6049d236-ab47-40dd-845e-af928985d66b-operator-scripts\") pod \"placement-58c8-account-create-update-f2bwj\" (UID: \"6049d236-ab47-40dd-845e-af928985d66b\") " pod="openstack/placement-58c8-account-create-update-f2bwj" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.453509 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dbaa1db-74f0-45f8-9f44-e27ebff3e89c-operator-scripts\") pod \"keystone-3fe9-account-create-update-mvfhw\" (UID: \"9dbaa1db-74f0-45f8-9f44-e27ebff3e89c\") " pod="openstack/keystone-3fe9-account-create-update-mvfhw" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.453528 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsqc4\" (UniqueName: \"kubernetes.io/projected/9dbaa1db-74f0-45f8-9f44-e27ebff3e89c-kube-api-access-bsqc4\") pod \"keystone-3fe9-account-create-update-mvfhw\" (UID: \"9dbaa1db-74f0-45f8-9f44-e27ebff3e89c\") " pod="openstack/keystone-3fe9-account-create-update-mvfhw" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.453586 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed125e14-c1dd-4edc-bf84-2e95b94afc30-operator-scripts\") pod \"placement-db-create-8625f\" (UID: \"ed125e14-c1dd-4edc-bf84-2e95b94afc30\") " pod="openstack/placement-db-create-8625f" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.453618 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq4vk\" (UniqueName: \"kubernetes.io/projected/ed125e14-c1dd-4edc-bf84-2e95b94afc30-kube-api-access-gq4vk\") pod \"placement-db-create-8625f\" (UID: \"ed125e14-c1dd-4edc-bf84-2e95b94afc30\") " pod="openstack/placement-db-create-8625f" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.453670 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfrn2\" (UniqueName: \"kubernetes.io/projected/6049d236-ab47-40dd-845e-af928985d66b-kube-api-access-xfrn2\") pod \"placement-58c8-account-create-update-f2bwj\" (UID: \"6049d236-ab47-40dd-845e-af928985d66b\") " pod="openstack/placement-58c8-account-create-update-f2bwj" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.454453 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed125e14-c1dd-4edc-bf84-2e95b94afc30-operator-scripts\") pod \"placement-db-create-8625f\" (UID: \"ed125e14-c1dd-4edc-bf84-2e95b94afc30\") " pod="openstack/placement-db-create-8625f" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.454452 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dbaa1db-74f0-45f8-9f44-e27ebff3e89c-operator-scripts\") pod \"keystone-3fe9-account-create-update-mvfhw\" (UID: \"9dbaa1db-74f0-45f8-9f44-e27ebff3e89c\") " pod="openstack/keystone-3fe9-account-create-update-mvfhw" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.481059 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq4vk\" (UniqueName: \"kubernetes.io/projected/ed125e14-c1dd-4edc-bf84-2e95b94afc30-kube-api-access-gq4vk\") pod \"placement-db-create-8625f\" (UID: \"ed125e14-c1dd-4edc-bf84-2e95b94afc30\") " pod="openstack/placement-db-create-8625f" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.481229 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsqc4\" (UniqueName: \"kubernetes.io/projected/9dbaa1db-74f0-45f8-9f44-e27ebff3e89c-kube-api-access-bsqc4\") pod \"keystone-3fe9-account-create-update-mvfhw\" (UID: \"9dbaa1db-74f0-45f8-9f44-e27ebff3e89c\") " 
pod="openstack/keystone-3fe9-account-create-update-mvfhw" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.555927 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfrn2\" (UniqueName: \"kubernetes.io/projected/6049d236-ab47-40dd-845e-af928985d66b-kube-api-access-xfrn2\") pod \"placement-58c8-account-create-update-f2bwj\" (UID: \"6049d236-ab47-40dd-845e-af928985d66b\") " pod="openstack/placement-58c8-account-create-update-f2bwj" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.556417 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6049d236-ab47-40dd-845e-af928985d66b-operator-scripts\") pod \"placement-58c8-account-create-update-f2bwj\" (UID: \"6049d236-ab47-40dd-845e-af928985d66b\") " pod="openstack/placement-58c8-account-create-update-f2bwj" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.557480 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6049d236-ab47-40dd-845e-af928985d66b-operator-scripts\") pod \"placement-58c8-account-create-update-f2bwj\" (UID: \"6049d236-ab47-40dd-845e-af928985d66b\") " pod="openstack/placement-58c8-account-create-update-f2bwj" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.561322 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8625f" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.571299 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3fe9-account-create-update-mvfhw" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.575692 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfrn2\" (UniqueName: \"kubernetes.io/projected/6049d236-ab47-40dd-845e-af928985d66b-kube-api-access-xfrn2\") pod \"placement-58c8-account-create-update-f2bwj\" (UID: \"6049d236-ab47-40dd-845e-af928985d66b\") " pod="openstack/placement-58c8-account-create-update-f2bwj" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.625716 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-58c8-account-create-update-f2bwj" Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.626956 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aa9e0f34-290c-4297-b65b-2046ea8bd21d","Type":"ContainerStarted","Data":"c3355a00a372c4f61092329a7c661fe3bafcea899cb27b421d0fe1905087f115"} Jan 27 20:28:08 crc kubenswrapper[4858]: I0127 20:28:08.685073 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=11.707704252 podStartE2EDuration="58.685050638s" podCreationTimestamp="2026-01-27 20:27:10 +0000 UTC" firstStartedPulling="2026-01-27 20:27:20.466729278 +0000 UTC m=+1185.174544984" lastFinishedPulling="2026-01-27 20:28:07.444075664 +0000 UTC m=+1232.151891370" observedRunningTime="2026-01-27 20:28:08.671533066 +0000 UTC m=+1233.379348772" watchObservedRunningTime="2026-01-27 20:28:08.685050638 +0000 UTC m=+1233.392866344" Jan 27 20:28:09 crc kubenswrapper[4858]: I0127 20:28:09.278687 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-58c8-account-create-update-f2bwj"] Jan 27 20:28:09 crc kubenswrapper[4858]: W0127 20:28:09.280817 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6049d236_ab47_40dd_845e_af928985d66b.slice/crio-73625262046a65df80df81b59e18e60e44ba77a82d4a4b5d3c3d09dc8d47ca39 WatchSource:0}: Error finding container 73625262046a65df80df81b59e18e60e44ba77a82d4a4b5d3c3d09dc8d47ca39: Status 404 returned error can't find the container with id 73625262046a65df80df81b59e18e60e44ba77a82d4a4b5d3c3d09dc8d47ca39 Jan 27 20:28:09 crc kubenswrapper[4858]: I0127 20:28:09.392764 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3fe9-account-create-update-mvfhw"] Jan 27 20:28:09 crc kubenswrapper[4858]: W0127 20:28:09.397527 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dbaa1db_74f0_45f8_9f44_e27ebff3e89c.slice/crio-06f8336bee6a33f1a06031ea0ae8b6195d2f2f2ed927b482184007e6d02bfeaa WatchSource:0}: Error finding container 06f8336bee6a33f1a06031ea0ae8b6195d2f2f2ed927b482184007e6d02bfeaa: Status 404 returned error can't find the container with id 06f8336bee6a33f1a06031ea0ae8b6195d2f2f2ed927b482184007e6d02bfeaa Jan 27 20:28:09 crc kubenswrapper[4858]: I0127 20:28:09.400480 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8625f"] Jan 27 20:28:09 crc kubenswrapper[4858]: I0127 20:28:09.635685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58c8-account-create-update-f2bwj" event={"ID":"6049d236-ab47-40dd-845e-af928985d66b","Type":"ContainerStarted","Data":"e2f6744f033e6446a683b5bedbca7e1674c0929ebb4dbd684eed82f02443ade4"} Jan 27 20:28:09 crc kubenswrapper[4858]: I0127 20:28:09.635737 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58c8-account-create-update-f2bwj" event={"ID":"6049d236-ab47-40dd-845e-af928985d66b","Type":"ContainerStarted","Data":"73625262046a65df80df81b59e18e60e44ba77a82d4a4b5d3c3d09dc8d47ca39"} Jan 27 20:28:09 crc kubenswrapper[4858]: I0127 20:28:09.639737 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3fe9-account-create-update-mvfhw" 
event={"ID":"9dbaa1db-74f0-45f8-9f44-e27ebff3e89c","Type":"ContainerStarted","Data":"fcd05f154137a5cd0cdef610c44dc0c175ad659c408b90d1885d43906b2bd0c3"} Jan 27 20:28:09 crc kubenswrapper[4858]: I0127 20:28:09.639789 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3fe9-account-create-update-mvfhw" event={"ID":"9dbaa1db-74f0-45f8-9f44-e27ebff3e89c","Type":"ContainerStarted","Data":"06f8336bee6a33f1a06031ea0ae8b6195d2f2f2ed927b482184007e6d02bfeaa"} Jan 27 20:28:09 crc kubenswrapper[4858]: I0127 20:28:09.642522 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8625f" event={"ID":"ed125e14-c1dd-4edc-bf84-2e95b94afc30","Type":"ContainerStarted","Data":"040a3230385826025b96254b816b02b7948c029a8e16e3ad5475a5c352e86d94"} Jan 27 20:28:09 crc kubenswrapper[4858]: I0127 20:28:09.642635 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8625f" event={"ID":"ed125e14-c1dd-4edc-bf84-2e95b94afc30","Type":"ContainerStarted","Data":"2067aaef5f3cc01dc1f231dc854608267ea86bf3fd8accab118d5fea3c09fea7"} Jan 27 20:28:09 crc kubenswrapper[4858]: I0127 20:28:09.656362 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-58c8-account-create-update-f2bwj" podStartSLOduration=1.656333681 podStartE2EDuration="1.656333681s" podCreationTimestamp="2026-01-27 20:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:28:09.650239035 +0000 UTC m=+1234.358054751" watchObservedRunningTime="2026-01-27 20:28:09.656333681 +0000 UTC m=+1234.364149387" Jan 27 20:28:09 crc kubenswrapper[4858]: I0127 20:28:09.676509 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-8625f" podStartSLOduration=1.6764832360000002 podStartE2EDuration="1.676483236s" podCreationTimestamp="2026-01-27 20:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:28:09.668402431 +0000 UTC m=+1234.376218147" watchObservedRunningTime="2026-01-27 20:28:09.676483236 +0000 UTC m=+1234.384298942" Jan 27 20:28:09 crc kubenswrapper[4858]: I0127 20:28:09.686134 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-3fe9-account-create-update-mvfhw" podStartSLOduration=1.686109535 podStartE2EDuration="1.686109535s" podCreationTimestamp="2026-01-27 20:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:28:09.682611343 +0000 UTC m=+1234.390427059" watchObservedRunningTime="2026-01-27 20:28:09.686109535 +0000 UTC m=+1234.393925241" Jan 27 20:28:09 crc kubenswrapper[4858]: I0127 20:28:09.967457 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.188115 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.190355 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.214176 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9btl8" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.214742 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.214750 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.214914 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.295160 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.318784 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-config\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.319341 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.319518 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvms5\" (UniqueName: \"kubernetes.io/projected/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-kube-api-access-vvms5\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.319566 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-scripts\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.319593 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.319624 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.319685 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: 
I0127 20:28:10.321075 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-4bp6j"] Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.322944 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-4bp6j" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.334001 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-4bp6j"] Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.377467 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-9080-account-create-update-j5tv4"] Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.378867 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-9080-account-create-update-j5tv4" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.383635 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.386375 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-9080-account-create-update-j5tv4"] Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.421698 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ec23204-a373-4fab-80be-43c45596f7e0-operator-scripts\") pod \"watcher-db-create-4bp6j\" (UID: \"1ec23204-a373-4fab-80be-43c45596f7e0\") " pod="openstack/watcher-db-create-4bp6j" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.421776 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk7th\" (UniqueName: \"kubernetes.io/projected/1ec23204-a373-4fab-80be-43c45596f7e0-kube-api-access-bk7th\") pod \"watcher-db-create-4bp6j\" (UID: \"1ec23204-a373-4fab-80be-43c45596f7e0\") " pod="openstack/watcher-db-create-4bp6j" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.421819 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvms5\" (UniqueName: \"kubernetes.io/projected/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-kube-api-access-vvms5\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.421850 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-scripts\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.421873 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.421901 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.421923 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-trxl4\" (UniqueName: \"kubernetes.io/projected/6772e04a-3e3e-427a-8e84-8979c1fe31af-kube-api-access-trxl4\") pod \"watcher-9080-account-create-update-j5tv4\" (UID: \"6772e04a-3e3e-427a-8e84-8979c1fe31af\") " pod="openstack/watcher-9080-account-create-update-j5tv4" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.421954 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6772e04a-3e3e-427a-8e84-8979c1fe31af-operator-scripts\") pod \"watcher-9080-account-create-update-j5tv4\" (UID: \"6772e04a-3e3e-427a-8e84-8979c1fe31af\") " pod="openstack/watcher-9080-account-create-update-j5tv4" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.422013 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.422054 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-config\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.422099 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.423206 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.423693 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-config\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.423795 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-scripts\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.433701 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.442774 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc 
kubenswrapper[4858]: I0127 20:28:10.452404 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvms5\" (UniqueName: \"kubernetes.io/projected/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-kube-api-access-vvms5\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.453238 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c57afc3-0c88-46a8-ab70-332b1a43ee7f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3c57afc3-0c88-46a8-ab70-332b1a43ee7f\") " pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.523408 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6772e04a-3e3e-427a-8e84-8979c1fe31af-operator-scripts\") pod \"watcher-9080-account-create-update-j5tv4\" (UID: \"6772e04a-3e3e-427a-8e84-8979c1fe31af\") " pod="openstack/watcher-9080-account-create-update-j5tv4" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.523637 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ec23204-a373-4fab-80be-43c45596f7e0-operator-scripts\") pod \"watcher-db-create-4bp6j\" (UID: \"1ec23204-a373-4fab-80be-43c45596f7e0\") " pod="openstack/watcher-db-create-4bp6j" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.523706 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk7th\" (UniqueName: \"kubernetes.io/projected/1ec23204-a373-4fab-80be-43c45596f7e0-kube-api-access-bk7th\") pod \"watcher-db-create-4bp6j\" (UID: \"1ec23204-a373-4fab-80be-43c45596f7e0\") " pod="openstack/watcher-db-create-4bp6j" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.523766 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trxl4\" (UniqueName: \"kubernetes.io/projected/6772e04a-3e3e-427a-8e84-8979c1fe31af-kube-api-access-trxl4\") pod \"watcher-9080-account-create-update-j5tv4\" (UID: \"6772e04a-3e3e-427a-8e84-8979c1fe31af\") " pod="openstack/watcher-9080-account-create-update-j5tv4" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.524357 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ec23204-a373-4fab-80be-43c45596f7e0-operator-scripts\") pod \"watcher-db-create-4bp6j\" (UID: \"1ec23204-a373-4fab-80be-43c45596f7e0\") " pod="openstack/watcher-db-create-4bp6j" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.524457 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6772e04a-3e3e-427a-8e84-8979c1fe31af-operator-scripts\") pod \"watcher-9080-account-create-update-j5tv4\" (UID: \"6772e04a-3e3e-427a-8e84-8979c1fe31af\") " pod="openstack/watcher-9080-account-create-update-j5tv4" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.545070 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk7th\" (UniqueName: \"kubernetes.io/projected/1ec23204-a373-4fab-80be-43c45596f7e0-kube-api-access-bk7th\") pod \"watcher-db-create-4bp6j\" (UID: \"1ec23204-a373-4fab-80be-43c45596f7e0\") " pod="openstack/watcher-db-create-4bp6j" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 
20:28:10.545223 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trxl4\" (UniqueName: \"kubernetes.io/projected/6772e04a-3e3e-427a-8e84-8979c1fe31af-kube-api-access-trxl4\") pod \"watcher-9080-account-create-update-j5tv4\" (UID: \"6772e04a-3e3e-427a-8e84-8979c1fe31af\") " pod="openstack/watcher-9080-account-create-update-j5tv4" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.587475 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.595938 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.656713 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-4bp6j" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.669437 4858 generic.go:334] "Generic (PLEG): container finished" podID="e95660bd-4df7-4b1f-8dd1-8183870d0c8e" containerID="54937daf06ddb0abe003b21866c6abeee8577431e7a61bf472052bd050626e7e" exitCode=0 Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.669565 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-msjpr" event={"ID":"e95660bd-4df7-4b1f-8dd1-8183870d0c8e","Type":"ContainerDied","Data":"54937daf06ddb0abe003b21866c6abeee8577431e7a61bf472052bd050626e7e"} Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.677456 4858 generic.go:334] "Generic (PLEG): container finished" podID="9dbaa1db-74f0-45f8-9f44-e27ebff3e89c" containerID="fcd05f154137a5cd0cdef610c44dc0c175ad659c408b90d1885d43906b2bd0c3" exitCode=0 Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.677631 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3fe9-account-create-update-mvfhw" event={"ID":"9dbaa1db-74f0-45f8-9f44-e27ebff3e89c","Type":"ContainerDied","Data":"fcd05f154137a5cd0cdef610c44dc0c175ad659c408b90d1885d43906b2bd0c3"} Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.678985 4858 generic.go:334] "Generic (PLEG): container finished" podID="ed125e14-c1dd-4edc-bf84-2e95b94afc30" containerID="040a3230385826025b96254b816b02b7948c029a8e16e3ad5475a5c352e86d94" exitCode=0 Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.679040 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8625f" event={"ID":"ed125e14-c1dd-4edc-bf84-2e95b94afc30","Type":"ContainerDied","Data":"040a3230385826025b96254b816b02b7948c029a8e16e3ad5475a5c352e86d94"} Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.681874 4858 generic.go:334] "Generic (PLEG): container finished" podID="6049d236-ab47-40dd-845e-af928985d66b" containerID="e2f6744f033e6446a683b5bedbca7e1674c0929ebb4dbd684eed82f02443ade4" exitCode=0 Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.681962 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58c8-account-create-update-f2bwj" event={"ID":"6049d236-ab47-40dd-845e-af928985d66b","Type":"ContainerDied","Data":"e2f6744f033e6446a683b5bedbca7e1674c0929ebb4dbd684eed82f02443ade4"} Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.819123 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-9080-account-create-update-j5tv4" Jan 27 20:28:10 crc kubenswrapper[4858]: I0127 20:28:10.823954 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 27 20:28:11 crc kubenswrapper[4858]: I0127 20:28:11.203385 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 20:28:11 crc kubenswrapper[4858]: I0127 20:28:11.301175 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-4bp6j"] Jan 27 20:28:11 crc kubenswrapper[4858]: W0127 20:28:11.311937 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ec23204_a373_4fab_80be_43c45596f7e0.slice/crio-a04c718fbede9d4f1b91e50bcd96623ff3d6eb2ebfd36d5d08e157b149cdf525 WatchSource:0}: Error finding container a04c718fbede9d4f1b91e50bcd96623ff3d6eb2ebfd36d5d08e157b149cdf525: Status 404 returned error can't find the container with id a04c718fbede9d4f1b91e50bcd96623ff3d6eb2ebfd36d5d08e157b149cdf525 Jan 27 20:28:11 crc kubenswrapper[4858]: I0127 20:28:11.404690 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-9080-account-create-update-j5tv4"] Jan 27 20:28:11 crc kubenswrapper[4858]: W0127 20:28:11.415286 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6772e04a_3e3e_427a_8e84_8979c1fe31af.slice/crio-9d055a7564dbbf87c592937b2db52f0b5e5cf317523c3ad860a1f7c9b1d2b38d WatchSource:0}: Error finding container 9d055a7564dbbf87c592937b2db52f0b5e5cf317523c3ad860a1f7c9b1d2b38d: Status 404 returned error can't find the container with id 9d055a7564dbbf87c592937b2db52f0b5e5cf317523c3ad860a1f7c9b1d2b38d Jan 27 20:28:11 crc kubenswrapper[4858]: I0127 20:28:11.512375 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:11 crc kubenswrapper[4858]: I0127 20:28:11.512789 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:11 crc kubenswrapper[4858]: I0127 20:28:11.515624 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:11 crc kubenswrapper[4858]: I0127 20:28:11.698529 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3c57afc3-0c88-46a8-ab70-332b1a43ee7f","Type":"ContainerStarted","Data":"d164ec4dd4ea3c6364ac4a52a2f78e2e3be5c4acc150c7c335064e07b7c022f1"} Jan 27 20:28:11 crc kubenswrapper[4858]: I0127 20:28:11.704537 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-4bp6j" event={"ID":"1ec23204-a373-4fab-80be-43c45596f7e0","Type":"ContainerStarted","Data":"ae03d626878e812a7b64adf5ef65f4c86026f3ddcfcc8475153ce6b1ead9d97e"} Jan 27 20:28:11 crc kubenswrapper[4858]: I0127 20:28:11.704628 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-4bp6j" event={"ID":"1ec23204-a373-4fab-80be-43c45596f7e0","Type":"ContainerStarted","Data":"a04c718fbede9d4f1b91e50bcd96623ff3d6eb2ebfd36d5d08e157b149cdf525"} Jan 27 20:28:11 crc kubenswrapper[4858]: I0127 20:28:11.708434 4858 generic.go:334] "Generic (PLEG): container finished" podID="ad881410-229a-4427-862b-8febd0e5ab61" containerID="ca76863e730916538ae7127ed44c1cadecfed9ca3f49b484cc25424b7224480b" exitCode=0 Jan 27 
20:28:11 crc kubenswrapper[4858]: I0127 20:28:11.708588 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad881410-229a-4427-862b-8febd0e5ab61","Type":"ContainerDied","Data":"ca76863e730916538ae7127ed44c1cadecfed9ca3f49b484cc25424b7224480b"} Jan 27 20:28:11 crc kubenswrapper[4858]: I0127 20:28:11.711271 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-9080-account-create-update-j5tv4" event={"ID":"6772e04a-3e3e-427a-8e84-8979c1fe31af","Type":"ContainerStarted","Data":"4fc16d5d7be264ec15cff3b5a3064abb362e6b25e30ac9cc704b3e74b52ad512"} Jan 27 20:28:11 crc kubenswrapper[4858]: I0127 20:28:11.711359 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-9080-account-create-update-j5tv4" event={"ID":"6772e04a-3e3e-427a-8e84-8979c1fe31af","Type":"ContainerStarted","Data":"9d055a7564dbbf87c592937b2db52f0b5e5cf317523c3ad860a1f7c9b1d2b38d"} Jan 27 20:28:11 crc kubenswrapper[4858]: I0127 20:28:11.714085 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:11 crc kubenswrapper[4858]: I0127 20:28:11.727009 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-create-4bp6j" podStartSLOduration=1.7269824219999999 podStartE2EDuration="1.726982422s" podCreationTimestamp="2026-01-27 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:28:11.725168149 +0000 UTC m=+1236.432983875" watchObservedRunningTime="2026-01-27 20:28:11.726982422 +0000 UTC m=+1236.434798148" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.372986 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8625f" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.385115 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.401120 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-9080-account-create-update-j5tv4" podStartSLOduration=2.401096328 podStartE2EDuration="2.401096328s" podCreationTimestamp="2026-01-27 20:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:28:11.811573444 +0000 UTC m=+1236.519389150" watchObservedRunningTime="2026-01-27 20:28:12.401096328 +0000 UTC m=+1237.108912024" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.417466 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3fe9-account-create-update-mvfhw" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.484111 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-ring-data-devices\") pod \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.484206 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-combined-ca-bundle\") pod \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.484259 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed125e14-c1dd-4edc-bf84-2e95b94afc30-operator-scripts\") pod \"ed125e14-c1dd-4edc-bf84-2e95b94afc30\" (UID: \"ed125e14-c1dd-4edc-bf84-2e95b94afc30\") " Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.484289 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-scripts\") pod \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.484314 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq4vk\" (UniqueName: \"kubernetes.io/projected/ed125e14-c1dd-4edc-bf84-2e95b94afc30-kube-api-access-gq4vk\") pod \"ed125e14-c1dd-4edc-bf84-2e95b94afc30\" (UID: \"ed125e14-c1dd-4edc-bf84-2e95b94afc30\") " Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.484387 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-swiftconf\") pod \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.484446 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-etc-swift\") pod \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.484485 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-dispersionconf\") pod \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.484662 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvgcc\" (UniqueName: \"kubernetes.io/projected/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-kube-api-access-bvgcc\") pod \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\" (UID: \"e95660bd-4df7-4b1f-8dd1-8183870d0c8e\") " Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.484893 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod 
"e95660bd-4df7-4b1f-8dd1-8183870d0c8e" (UID: "e95660bd-4df7-4b1f-8dd1-8183870d0c8e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.485249 4858 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.487534 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed125e14-c1dd-4edc-bf84-2e95b94afc30-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed125e14-c1dd-4edc-bf84-2e95b94afc30" (UID: "ed125e14-c1dd-4edc-bf84-2e95b94afc30"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.489322 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e95660bd-4df7-4b1f-8dd1-8183870d0c8e" (UID: "e95660bd-4df7-4b1f-8dd1-8183870d0c8e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.490819 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-58c8-account-create-update-f2bwj" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.491195 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-kube-api-access-bvgcc" (OuterVolumeSpecName: "kube-api-access-bvgcc") pod "e95660bd-4df7-4b1f-8dd1-8183870d0c8e" (UID: "e95660bd-4df7-4b1f-8dd1-8183870d0c8e"). InnerVolumeSpecName "kube-api-access-bvgcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.491324 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed125e14-c1dd-4edc-bf84-2e95b94afc30-kube-api-access-gq4vk" (OuterVolumeSpecName: "kube-api-access-gq4vk") pod "ed125e14-c1dd-4edc-bf84-2e95b94afc30" (UID: "ed125e14-c1dd-4edc-bf84-2e95b94afc30"). InnerVolumeSpecName "kube-api-access-gq4vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.499403 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "e95660bd-4df7-4b1f-8dd1-8183870d0c8e" (UID: "e95660bd-4df7-4b1f-8dd1-8183870d0c8e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.515648 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-scripts" (OuterVolumeSpecName: "scripts") pod "e95660bd-4df7-4b1f-8dd1-8183870d0c8e" (UID: "e95660bd-4df7-4b1f-8dd1-8183870d0c8e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.519865 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "e95660bd-4df7-4b1f-8dd1-8183870d0c8e" (UID: "e95660bd-4df7-4b1f-8dd1-8183870d0c8e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.526937 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e95660bd-4df7-4b1f-8dd1-8183870d0c8e" (UID: "e95660bd-4df7-4b1f-8dd1-8183870d0c8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.587949 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsqc4\" (UniqueName: \"kubernetes.io/projected/9dbaa1db-74f0-45f8-9f44-e27ebff3e89c-kube-api-access-bsqc4\") pod \"9dbaa1db-74f0-45f8-9f44-e27ebff3e89c\" (UID: \"9dbaa1db-74f0-45f8-9f44-e27ebff3e89c\") " Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.588039 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dbaa1db-74f0-45f8-9f44-e27ebff3e89c-operator-scripts\") pod \"9dbaa1db-74f0-45f8-9f44-e27ebff3e89c\" (UID: \"9dbaa1db-74f0-45f8-9f44-e27ebff3e89c\") " Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.588275 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfrn2\" (UniqueName: \"kubernetes.io/projected/6049d236-ab47-40dd-845e-af928985d66b-kube-api-access-xfrn2\") pod \"6049d236-ab47-40dd-845e-af928985d66b\" (UID: \"6049d236-ab47-40dd-845e-af928985d66b\") " Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.588302 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6049d236-ab47-40dd-845e-af928985d66b-operator-scripts\") pod \"6049d236-ab47-40dd-845e-af928985d66b\" (UID: \"6049d236-ab47-40dd-845e-af928985d66b\") " Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.589542 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dbaa1db-74f0-45f8-9f44-e27ebff3e89c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9dbaa1db-74f0-45f8-9f44-e27ebff3e89c" (UID: "9dbaa1db-74f0-45f8-9f44-e27ebff3e89c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.590534 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6049d236-ab47-40dd-845e-af928985d66b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6049d236-ab47-40dd-845e-af928985d66b" (UID: "6049d236-ab47-40dd-845e-af928985d66b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.591325 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.591659 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dbaa1db-74f0-45f8-9f44-e27ebff3e89c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.591763 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed125e14-c1dd-4edc-bf84-2e95b94afc30-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.591836 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.591922 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq4vk\" (UniqueName: \"kubernetes.io/projected/ed125e14-c1dd-4edc-bf84-2e95b94afc30-kube-api-access-gq4vk\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.592231 4858 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.592440 4858 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.592529 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6049d236-ab47-40dd-845e-af928985d66b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.592625 4858 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.592709 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvgcc\" (UniqueName: \"kubernetes.io/projected/e95660bd-4df7-4b1f-8dd1-8183870d0c8e-kube-api-access-bvgcc\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.593915 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6049d236-ab47-40dd-845e-af928985d66b-kube-api-access-xfrn2" (OuterVolumeSpecName: "kube-api-access-xfrn2") pod "6049d236-ab47-40dd-845e-af928985d66b" (UID: "6049d236-ab47-40dd-845e-af928985d66b"). InnerVolumeSpecName "kube-api-access-xfrn2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.593987 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dbaa1db-74f0-45f8-9f44-e27ebff3e89c-kube-api-access-bsqc4" (OuterVolumeSpecName: "kube-api-access-bsqc4") pod "9dbaa1db-74f0-45f8-9f44-e27ebff3e89c" (UID: "9dbaa1db-74f0-45f8-9f44-e27ebff3e89c"). InnerVolumeSpecName "kube-api-access-bsqc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.694528 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfrn2\" (UniqueName: \"kubernetes.io/projected/6049d236-ab47-40dd-845e-af928985d66b-kube-api-access-xfrn2\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.694605 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsqc4\" (UniqueName: \"kubernetes.io/projected/9dbaa1db-74f0-45f8-9f44-e27ebff3e89c-kube-api-access-bsqc4\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.727974 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3fe9-account-create-update-mvfhw" event={"ID":"9dbaa1db-74f0-45f8-9f44-e27ebff3e89c","Type":"ContainerDied","Data":"06f8336bee6a33f1a06031ea0ae8b6195d2f2f2ed927b482184007e6d02bfeaa"} Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.728670 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06f8336bee6a33f1a06031ea0ae8b6195d2f2f2ed927b482184007e6d02bfeaa" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.728342 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3fe9-account-create-update-mvfhw" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.730124 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8625f" event={"ID":"ed125e14-c1dd-4edc-bf84-2e95b94afc30","Type":"ContainerDied","Data":"2067aaef5f3cc01dc1f231dc854608267ea86bf3fd8accab118d5fea3c09fea7"} Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.730174 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2067aaef5f3cc01dc1f231dc854608267ea86bf3fd8accab118d5fea3c09fea7" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.730230 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8625f" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.741657 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58c8-account-create-update-f2bwj" event={"ID":"6049d236-ab47-40dd-845e-af928985d66b","Type":"ContainerDied","Data":"73625262046a65df80df81b59e18e60e44ba77a82d4a4b5d3c3d09dc8d47ca39"} Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.741936 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73625262046a65df80df81b59e18e60e44ba77a82d4a4b5d3c3d09dc8d47ca39" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.742281 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-58c8-account-create-update-f2bwj" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.747998 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3c57afc3-0c88-46a8-ab70-332b1a43ee7f","Type":"ContainerStarted","Data":"18699ebae0e628145c6e0d00e6d14b9f2658558aeec7f455ed2ae31885068c52"} Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.755870 4858 generic.go:334] "Generic (PLEG): container finished" podID="1ec23204-a373-4fab-80be-43c45596f7e0" containerID="ae03d626878e812a7b64adf5ef65f4c86026f3ddcfcc8475153ce6b1ead9d97e" exitCode=0 Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.756646 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-4bp6j" event={"ID":"1ec23204-a373-4fab-80be-43c45596f7e0","Type":"ContainerDied","Data":"ae03d626878e812a7b64adf5ef65f4c86026f3ddcfcc8475153ce6b1ead9d97e"} Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.764772 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad881410-229a-4427-862b-8febd0e5ab61","Type":"ContainerStarted","Data":"bbe9d09c4a05d23f65ac7dfee7aee83648557a8f6299c8b7af77c260fc6b0d14"} Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.765444 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.782853 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-msjpr" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.783416 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-msjpr" event={"ID":"e95660bd-4df7-4b1f-8dd1-8183870d0c8e","Type":"ContainerDied","Data":"91d80ee3d760f8dc25e3dc34e62c77d40f46d23c9cd147834141ff510fbe86cf"} Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.783628 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91d80ee3d760f8dc25e3dc34e62c77d40f46d23c9cd147834141ff510fbe86cf" Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.792600 4858 generic.go:334] "Generic (PLEG): container finished" podID="6772e04a-3e3e-427a-8e84-8979c1fe31af" containerID="4fc16d5d7be264ec15cff3b5a3064abb362e6b25e30ac9cc704b3e74b52ad512" exitCode=0 Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.793465 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-9080-account-create-update-j5tv4" event={"ID":"6772e04a-3e3e-427a-8e84-8979c1fe31af","Type":"ContainerDied","Data":"4fc16d5d7be264ec15cff3b5a3064abb362e6b25e30ac9cc704b3e74b52ad512"} Jan 27 20:28:12 crc kubenswrapper[4858]: I0127 20:28:12.822179 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=53.922360528 podStartE2EDuration="1m9.822153918s" podCreationTimestamp="2026-01-27 20:27:03 +0000 UTC" firstStartedPulling="2026-01-27 20:27:20.477075533 +0000 UTC m=+1185.184891239" lastFinishedPulling="2026-01-27 20:27:36.376868923 +0000 UTC m=+1201.084684629" observedRunningTime="2026-01-27 20:28:12.806567616 +0000 UTC m=+1237.514383332" watchObservedRunningTime="2026-01-27 20:28:12.822153918 +0000 UTC m=+1237.529969614" Jan 27 20:28:13 crc kubenswrapper[4858]: E0127 20:28:13.003805 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded125e14_c1dd_4edc_bf84_2e95b94afc30.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6049d236_ab47_40dd_845e_af928985d66b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded125e14_c1dd_4edc_bf84_2e95b94afc30.slice/crio-2067aaef5f3cc01dc1f231dc854608267ea86bf3fd8accab118d5fea3c09fea7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode95660bd_4df7_4b1f_8dd1_8183870d0c8e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dbaa1db_74f0_45f8_9f44_e27ebff3e89c.slice\": RecentStats: unable to find data in memory cache]" Jan 27 20:28:13 crc kubenswrapper[4858]: I0127 20:28:13.805474 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3c57afc3-0c88-46a8-ab70-332b1a43ee7f","Type":"ContainerStarted","Data":"f90dce31cae9626b23b24d7b7e0e43a672b88efc411158fc3eaaaab0a95fb62e"} Jan 27 20:28:13 crc kubenswrapper[4858]: I0127 20:28:13.841466 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.838413218 podStartE2EDuration="3.841437432s" podCreationTimestamp="2026-01-27 20:28:10 +0000 UTC" firstStartedPulling="2026-01-27 20:28:11.211853625 +0000 UTC m=+1235.919669331" lastFinishedPulling="2026-01-27 20:28:12.214877829 +0000 UTC m=+1236.922693545" observedRunningTime="2026-01-27 20:28:13.834194592 +0000 UTC m=+1238.542010308" watchObservedRunningTime="2026-01-27 20:28:13.841437432 +0000 UTC m=+1238.549253128" Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.048223 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.348794 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-4bp6j" Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.355224 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-9080-account-create-update-j5tv4" Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.446236 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trxl4\" (UniqueName: \"kubernetes.io/projected/6772e04a-3e3e-427a-8e84-8979c1fe31af-kube-api-access-trxl4\") pod \"6772e04a-3e3e-427a-8e84-8979c1fe31af\" (UID: \"6772e04a-3e3e-427a-8e84-8979c1fe31af\") " Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.446296 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ec23204-a373-4fab-80be-43c45596f7e0-operator-scripts\") pod \"1ec23204-a373-4fab-80be-43c45596f7e0\" (UID: \"1ec23204-a373-4fab-80be-43c45596f7e0\") " Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.446583 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk7th\" (UniqueName: \"kubernetes.io/projected/1ec23204-a373-4fab-80be-43c45596f7e0-kube-api-access-bk7th\") pod \"1ec23204-a373-4fab-80be-43c45596f7e0\" (UID: \"1ec23204-a373-4fab-80be-43c45596f7e0\") " Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.446722 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6772e04a-3e3e-427a-8e84-8979c1fe31af-operator-scripts\") pod \"6772e04a-3e3e-427a-8e84-8979c1fe31af\" (UID: \"6772e04a-3e3e-427a-8e84-8979c1fe31af\") " Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.447640 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6772e04a-3e3e-427a-8e84-8979c1fe31af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6772e04a-3e3e-427a-8e84-8979c1fe31af" (UID: "6772e04a-3e3e-427a-8e84-8979c1fe31af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.449100 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ec23204-a373-4fab-80be-43c45596f7e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ec23204-a373-4fab-80be-43c45596f7e0" (UID: "1ec23204-a373-4fab-80be-43c45596f7e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.464906 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec23204-a373-4fab-80be-43c45596f7e0-kube-api-access-bk7th" (OuterVolumeSpecName: "kube-api-access-bk7th") pod "1ec23204-a373-4fab-80be-43c45596f7e0" (UID: "1ec23204-a373-4fab-80be-43c45596f7e0"). InnerVolumeSpecName "kube-api-access-bk7th". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.468034 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6772e04a-3e3e-427a-8e84-8979c1fe31af-kube-api-access-trxl4" (OuterVolumeSpecName: "kube-api-access-trxl4") pod "6772e04a-3e3e-427a-8e84-8979c1fe31af" (UID: "6772e04a-3e3e-427a-8e84-8979c1fe31af"). InnerVolumeSpecName "kube-api-access-trxl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.548949 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk7th\" (UniqueName: \"kubernetes.io/projected/1ec23204-a373-4fab-80be-43c45596f7e0-kube-api-access-bk7th\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.549015 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6772e04a-3e3e-427a-8e84-8979c1fe31af-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.549025 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trxl4\" (UniqueName: \"kubernetes.io/projected/6772e04a-3e3e-427a-8e84-8979c1fe31af-kube-api-access-trxl4\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.549034 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ec23204-a373-4fab-80be-43c45596f7e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.822711 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-4bp6j" event={"ID":"1ec23204-a373-4fab-80be-43c45596f7e0","Type":"ContainerDied","Data":"a04c718fbede9d4f1b91e50bcd96623ff3d6eb2ebfd36d5d08e157b149cdf525"} Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.823105 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a04c718fbede9d4f1b91e50bcd96623ff3d6eb2ebfd36d5d08e157b149cdf525" Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.823181 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-4bp6j" Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.831907 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-9080-account-create-update-j5tv4" event={"ID":"6772e04a-3e3e-427a-8e84-8979c1fe31af","Type":"ContainerDied","Data":"9d055a7564dbbf87c592937b2db52f0b5e5cf317523c3ad860a1f7c9b1d2b38d"} Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.831959 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d055a7564dbbf87c592937b2db52f0b5e5cf317523c3ad860a1f7c9b1d2b38d" Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.832033 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerName="prometheus" containerID="cri-o://7a46157259e5e9d82e9db91fb5da218a22d65c3c0c118df0058647a983c7151c" gracePeriod=600 Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.832109 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-9080-account-create-update-j5tv4" Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.832132 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerName="config-reloader" containerID="cri-o://3b3e39cc70a770b37ee336c30cd07551febee27fdb0ad58a94a144a8a68a0f1a" gracePeriod=600 Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.832934 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 27 20:28:14 crc kubenswrapper[4858]: I0127 20:28:14.837269 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerName="thanos-sidecar" containerID="cri-o://c3355a00a372c4f61092329a7c661fe3bafcea899cb27b421d0fe1905087f115" gracePeriod=600 Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.270151 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kdm4j"] Jan 27 20:28:15 crc kubenswrapper[4858]: E0127 20:28:15.271934 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec23204-a373-4fab-80be-43c45596f7e0" containerName="mariadb-database-create" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.272016 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec23204-a373-4fab-80be-43c45596f7e0" containerName="mariadb-database-create" Jan 27 20:28:15 crc kubenswrapper[4858]: E0127 20:28:15.272100 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed125e14-c1dd-4edc-bf84-2e95b94afc30" containerName="mariadb-database-create" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.272151 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed125e14-c1dd-4edc-bf84-2e95b94afc30" containerName="mariadb-database-create" Jan 27 20:28:15 crc kubenswrapper[4858]: E0127 20:28:15.272230 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e95660bd-4df7-4b1f-8dd1-8183870d0c8e" containerName="swift-ring-rebalance" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.272282 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e95660bd-4df7-4b1f-8dd1-8183870d0c8e" containerName="swift-ring-rebalance" Jan 27 20:28:15 crc kubenswrapper[4858]: E0127 20:28:15.272342 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6049d236-ab47-40dd-845e-af928985d66b" containerName="mariadb-account-create-update" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.272397 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6049d236-ab47-40dd-845e-af928985d66b" containerName="mariadb-account-create-update" Jan 27 20:28:15 crc kubenswrapper[4858]: E0127 20:28:15.272462 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbaa1db-74f0-45f8-9f44-e27ebff3e89c" containerName="mariadb-account-create-update" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.272518 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dbaa1db-74f0-45f8-9f44-e27ebff3e89c" containerName="mariadb-account-create-update" Jan 27 20:28:15 crc kubenswrapper[4858]: E0127 20:28:15.272604 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6772e04a-3e3e-427a-8e84-8979c1fe31af" containerName="mariadb-account-create-update" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.272659 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6772e04a-3e3e-427a-8e84-8979c1fe31af" containerName="mariadb-account-create-update" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.272933 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6772e04a-3e3e-427a-8e84-8979c1fe31af" containerName="mariadb-account-create-update" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.273013 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dbaa1db-74f0-45f8-9f44-e27ebff3e89c" containerName="mariadb-account-create-update" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.273074 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec23204-a373-4fab-80be-43c45596f7e0" containerName="mariadb-database-create" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.273144 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e95660bd-4df7-4b1f-8dd1-8183870d0c8e" containerName="swift-ring-rebalance" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.273202 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6049d236-ab47-40dd-845e-af928985d66b" containerName="mariadb-account-create-update" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.273258 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed125e14-c1dd-4edc-bf84-2e95b94afc30" containerName="mariadb-database-create" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.274074 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kdm4j" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.280083 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.294034 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kdm4j"] Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.367164 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432-operator-scripts\") pod \"root-account-create-update-kdm4j\" (UID: \"f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432\") " pod="openstack/root-account-create-update-kdm4j" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.367436 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwmgl\" (UniqueName: \"kubernetes.io/projected/f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432-kube-api-access-rwmgl\") pod \"root-account-create-update-kdm4j\" (UID: \"f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432\") " pod="openstack/root-account-create-update-kdm4j" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.469933 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwmgl\" (UniqueName: \"kubernetes.io/projected/f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432-kube-api-access-rwmgl\") pod \"root-account-create-update-kdm4j\" (UID: \"f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432\") " pod="openstack/root-account-create-update-kdm4j" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.470079 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432-operator-scripts\") pod \"root-account-create-update-kdm4j\" (UID: \"f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432\") " 
pod="openstack/root-account-create-update-kdm4j" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.471872 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432-operator-scripts\") pod \"root-account-create-update-kdm4j\" (UID: \"f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432\") " pod="openstack/root-account-create-update-kdm4j" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.491367 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwmgl\" (UniqueName: \"kubernetes.io/projected/f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432-kube-api-access-rwmgl\") pod \"root-account-create-update-kdm4j\" (UID: \"f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432\") " pod="openstack/root-account-create-update-kdm4j" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.722175 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kdm4j" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.849912 4858 generic.go:334] "Generic (PLEG): container finished" podID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerID="c3355a00a372c4f61092329a7c661fe3bafcea899cb27b421d0fe1905087f115" exitCode=0 Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.849960 4858 generic.go:334] "Generic (PLEG): container finished" podID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerID="3b3e39cc70a770b37ee336c30cd07551febee27fdb0ad58a94a144a8a68a0f1a" exitCode=0 Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.849977 4858 generic.go:334] "Generic (PLEG): container finished" podID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerID="7a46157259e5e9d82e9db91fb5da218a22d65c3c0c118df0058647a983c7151c" exitCode=0 Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.850011 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aa9e0f34-290c-4297-b65b-2046ea8bd21d","Type":"ContainerDied","Data":"c3355a00a372c4f61092329a7c661fe3bafcea899cb27b421d0fe1905087f115"} Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.850079 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aa9e0f34-290c-4297-b65b-2046ea8bd21d","Type":"ContainerDied","Data":"3b3e39cc70a770b37ee336c30cd07551febee27fdb0ad58a94a144a8a68a0f1a"} Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.850092 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aa9e0f34-290c-4297-b65b-2046ea8bd21d","Type":"ContainerDied","Data":"7a46157259e5e9d82e9db91fb5da218a22d65c3c0c118df0058647a983c7151c"} Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.850104 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aa9e0f34-290c-4297-b65b-2046ea8bd21d","Type":"ContainerDied","Data":"870ce57aa7b1ed8222f8ccc2b200f8df468376d3b7f6bb116352427d829698ef"} Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.850116 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870ce57aa7b1ed8222f8ccc2b200f8df468376d3b7f6bb116352427d829698ef" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.858143 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.980390 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-web-config\") pod \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.980879 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-1\") pod \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.980930 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-2\") pod \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.980983 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aa9e0f34-290c-4297-b65b-2046ea8bd21d-config-out\") pod \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.981035 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-thanos-prometheus-http-client-file\") pod \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.981186 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-config\") pod \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.981274 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77qw6\" (UniqueName: \"kubernetes.io/projected/aa9e0f34-290c-4297-b65b-2046ea8bd21d-kube-api-access-77qw6\") pod \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.981476 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") pod \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.981506 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aa9e0f34-290c-4297-b65b-2046ea8bd21d-tls-assets\") pod \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.981622 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-0\") pod \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\" (UID: \"aa9e0f34-290c-4297-b65b-2046ea8bd21d\") " Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.990835 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "aa9e0f34-290c-4297-b65b-2046ea8bd21d" (UID: "aa9e0f34-290c-4297-b65b-2046ea8bd21d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.994764 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "aa9e0f34-290c-4297-b65b-2046ea8bd21d" (UID: "aa9e0f34-290c-4297-b65b-2046ea8bd21d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.995355 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "aa9e0f34-290c-4297-b65b-2046ea8bd21d" (UID: "aa9e0f34-290c-4297-b65b-2046ea8bd21d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:15 crc kubenswrapper[4858]: I0127 20:28:15.998347 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-config" (OuterVolumeSpecName: "config") pod "aa9e0f34-290c-4297-b65b-2046ea8bd21d" (UID: "aa9e0f34-290c-4297-b65b-2046ea8bd21d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.000748 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa9e0f34-290c-4297-b65b-2046ea8bd21d-kube-api-access-77qw6" (OuterVolumeSpecName: "kube-api-access-77qw6") pod "aa9e0f34-290c-4297-b65b-2046ea8bd21d" (UID: "aa9e0f34-290c-4297-b65b-2046ea8bd21d"). InnerVolumeSpecName "kube-api-access-77qw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.002490 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa9e0f34-290c-4297-b65b-2046ea8bd21d-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "aa9e0f34-290c-4297-b65b-2046ea8bd21d" (UID: "aa9e0f34-290c-4297-b65b-2046ea8bd21d"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.009123 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "aa9e0f34-290c-4297-b65b-2046ea8bd21d" (UID: "aa9e0f34-290c-4297-b65b-2046ea8bd21d"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.012477 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa9e0f34-290c-4297-b65b-2046ea8bd21d-config-out" (OuterVolumeSpecName: "config-out") pod "aa9e0f34-290c-4297-b65b-2046ea8bd21d" (UID: "aa9e0f34-290c-4297-b65b-2046ea8bd21d"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.020989 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "aa9e0f34-290c-4297-b65b-2046ea8bd21d" (UID: "aa9e0f34-290c-4297-b65b-2046ea8bd21d"). InnerVolumeSpecName "pvc-805dfc34-a393-4134-854b-f25365c0a015". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.057394 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-web-config" (OuterVolumeSpecName: "web-config") pod "aa9e0f34-290c-4297-b65b-2046ea8bd21d" (UID: "aa9e0f34-290c-4297-b65b-2046ea8bd21d"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.086376 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.086430 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77qw6\" (UniqueName: \"kubernetes.io/projected/aa9e0f34-290c-4297-b65b-2046ea8bd21d-kube-api-access-77qw6\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.086481 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-805dfc34-a393-4134-854b-f25365c0a015\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") on node \"crc\" " Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.086494 4858 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aa9e0f34-290c-4297-b65b-2046ea8bd21d-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.086507 4858 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.086517 4858 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-web-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.086587 4858 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.086598 4858 reconciler_common.go:293] "Volume detached for volume 
\"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/aa9e0f34-290c-4297-b65b-2046ea8bd21d-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.086606 4858 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aa9e0f34-290c-4297-b65b-2046ea8bd21d-config-out\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.086618 4858 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aa9e0f34-290c-4297-b65b-2046ea8bd21d-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.115660 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.115826 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-805dfc34-a393-4134-854b-f25365c0a015" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015") on node "crc" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.188496 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-805dfc34-a393-4134-854b-f25365c0a015\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.250219 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kdm4j"] Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.898098 4858 generic.go:334] "Generic (PLEG): container finished" podID="f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432" containerID="bebdb9af01ac798d06c48efe567b86b340a5b27ffc343d8d5385ba411cb0a5cc" exitCode=0 Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.898524 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.898778 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kdm4j" event={"ID":"f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432","Type":"ContainerDied","Data":"bebdb9af01ac798d06c48efe567b86b340a5b27ffc343d8d5385ba411cb0a5cc"} Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.898847 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kdm4j" event={"ID":"f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432","Type":"ContainerStarted","Data":"4fd21892313b2436f3605fd06e8d5ee3b0d21ece2037f29242fd8e3cc6e66819"} Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.954334 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.961342 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.997468 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 20:28:16 crc kubenswrapper[4858]: E0127 20:28:16.997988 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerName="init-config-reloader" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.998014 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerName="init-config-reloader" Jan 27 20:28:16 crc kubenswrapper[4858]: E0127 20:28:16.998029 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerName="prometheus" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.998037 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerName="prometheus" Jan 27 20:28:16 crc kubenswrapper[4858]: E0127 20:28:16.998050 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerName="thanos-sidecar" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.998056 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerName="thanos-sidecar" Jan 27 20:28:16 crc kubenswrapper[4858]: E0127 20:28:16.998067 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerName="config-reloader" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.998072 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerName="config-reloader" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.998247 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerName="thanos-sidecar" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.998266 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerName="config-reloader" Jan 27 20:28:16 crc kubenswrapper[4858]: I0127 20:28:16.998277 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" containerName="prometheus" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.000063 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.002234 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.002643 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.004627 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.004638 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.004735 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-pdrbd" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.006225 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.006302 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.006671 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.018478 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.032591 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.116775 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.116831 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.116862 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.116888 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.116974 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-config\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.116996 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.117021 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-805dfc34-a393-4134-854b-f25365c0a015\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.117051 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.117079 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.117104 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.117133 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm582\" (UniqueName: \"kubernetes.io/projected/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-kube-api-access-bm582\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.117159 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " 
pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.117183 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.219272 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.219337 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-805dfc34-a393-4134-854b-f25365c0a015\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.219398 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.220021 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.220060 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.220440 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.220850 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 
20:28:17.220913 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm582\" (UniqueName: \"kubernetes.io/projected/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-kube-api-access-bm582\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.220943 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.220964 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.221014 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.221038 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.221066 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.221088 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.221213 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-config\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.222089 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" 
(UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.225453 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.225537 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.225590 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-805dfc34-a393-4134-854b-f25365c0a015\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/803e592a1a81ac6dfdcc5cad0d3e656e83a32ab2cebbc52f70d41fc2b9c7180d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.226918 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.229018 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.229659 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.230635 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.231341 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.233913 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-config\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.239140 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.247569 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm582\" (UniqueName: \"kubernetes.io/projected/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-kube-api-access-bm582\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.276147 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-805dfc34-a393-4134-854b-f25365c0a015\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") pod \"prometheus-metric-storage-0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.320674 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.749853 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 20:28:17 crc kubenswrapper[4858]: W0127 20:28:17.753910 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfba3a657_b6b7_4fb2_87f6_1e1f25626dd0.slice/crio-bcd9bc9c355a98976b006fa1e705745b4be5427ffd919fc52118a33cb0bf0f65 WatchSource:0}: Error finding container bcd9bc9c355a98976b006fa1e705745b4be5427ffd919fc52118a33cb0bf0f65: Status 404 returned error can't find the container with id bcd9bc9c355a98976b006fa1e705745b4be5427ffd919fc52118a33cb0bf0f65 Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.907593 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0","Type":"ContainerStarted","Data":"bcd9bc9c355a98976b006fa1e705745b4be5427ffd919fc52118a33cb0bf0f65"} Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.924496 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-h782w"] Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.925925 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-h782w" Jan 27 20:28:17 crc kubenswrapper[4858]: I0127 20:28:17.947402 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-h782w"] Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.038615 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw5tc\" (UniqueName: \"kubernetes.io/projected/c7c781eb-de63-45c1-b5b2-0496fe6f2d34-kube-api-access-tw5tc\") pod \"keystone-db-create-h782w\" (UID: \"c7c781eb-de63-45c1-b5b2-0496fe6f2d34\") " pod="openstack/keystone-db-create-h782w" Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.038688 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7c781eb-de63-45c1-b5b2-0496fe6f2d34-operator-scripts\") pod \"keystone-db-create-h782w\" (UID: \"c7c781eb-de63-45c1-b5b2-0496fe6f2d34\") " pod="openstack/keystone-db-create-h782w" Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.084029 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa9e0f34-290c-4297-b65b-2046ea8bd21d" path="/var/lib/kubelet/pods/aa9e0f34-290c-4297-b65b-2046ea8bd21d/volumes" Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.140405 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw5tc\" (UniqueName: \"kubernetes.io/projected/c7c781eb-de63-45c1-b5b2-0496fe6f2d34-kube-api-access-tw5tc\") pod \"keystone-db-create-h782w\" (UID: \"c7c781eb-de63-45c1-b5b2-0496fe6f2d34\") " pod="openstack/keystone-db-create-h782w" Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.140496 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7c781eb-de63-45c1-b5b2-0496fe6f2d34-operator-scripts\") pod \"keystone-db-create-h782w\" (UID: \"c7c781eb-de63-45c1-b5b2-0496fe6f2d34\") " pod="openstack/keystone-db-create-h782w" Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.141531 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7c781eb-de63-45c1-b5b2-0496fe6f2d34-operator-scripts\") pod \"keystone-db-create-h782w\" (UID: \"c7c781eb-de63-45c1-b5b2-0496fe6f2d34\") " pod="openstack/keystone-db-create-h782w" Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.163427 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw5tc\" (UniqueName: \"kubernetes.io/projected/c7c781eb-de63-45c1-b5b2-0496fe6f2d34-kube-api-access-tw5tc\") pod \"keystone-db-create-h782w\" (UID: \"c7c781eb-de63-45c1-b5b2-0496fe6f2d34\") " pod="openstack/keystone-db-create-h782w" Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.231033 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kdm4j" Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.246676 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-h782w" Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.349608 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432-operator-scripts\") pod \"f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432\" (UID: \"f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432\") " Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.349705 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwmgl\" (UniqueName: \"kubernetes.io/projected/f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432-kube-api-access-rwmgl\") pod \"f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432\" (UID: \"f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432\") " Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.350732 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432" (UID: "f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.375782 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432-kube-api-access-rwmgl" (OuterVolumeSpecName: "kube-api-access-rwmgl") pod "f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432" (UID: "f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432"). InnerVolumeSpecName "kube-api-access-rwmgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.452696 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.452740 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwmgl\" (UniqueName: \"kubernetes.io/projected/f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432-kube-api-access-rwmgl\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.736363 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-h782w"] Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.917377 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-h782w" event={"ID":"c7c781eb-de63-45c1-b5b2-0496fe6f2d34","Type":"ContainerStarted","Data":"9feb9717fb2d58a1b03b35dcc2ed9eb769b85028aca69fd7f8accea31c47afcf"} Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.921193 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kdm4j" event={"ID":"f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432","Type":"ContainerDied","Data":"4fd21892313b2436f3605fd06e8d5ee3b0d21ece2037f29242fd8e3cc6e66819"} Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.921244 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fd21892313b2436f3605fd06e8d5ee3b0d21ece2037f29242fd8e3cc6e66819" Jan 27 20:28:18 crc kubenswrapper[4858]: I0127 20:28:18.921270 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kdm4j" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.133548 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.142040 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-vhbc7" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.389330 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jc5cc-config-df2v8"] Jan 27 20:28:19 crc kubenswrapper[4858]: E0127 20:28:19.389882 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432" containerName="mariadb-account-create-update" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.389912 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432" containerName="mariadb-account-create-update" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.390170 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432" containerName="mariadb-account-create-update" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.391014 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.394012 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.401494 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jc5cc-config-df2v8"] Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.476957 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-log-ovn\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.477021 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-run-ovn\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.477062 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djxp7\" (UniqueName: \"kubernetes.io/projected/9709b632-3b79-4965-b00d-c96bf2ab812b-kube-api-access-djxp7\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.477343 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-run\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.477441 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9709b632-3b79-4965-b00d-c96bf2ab812b-scripts\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.477716 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9709b632-3b79-4965-b00d-c96bf2ab812b-additional-scripts\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.580475 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-log-ovn\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.580525 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-run-ovn\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.580585 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxp7\" (UniqueName: \"kubernetes.io/projected/9709b632-3b79-4965-b00d-c96bf2ab812b-kube-api-access-djxp7\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.580618 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-run\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.580640 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9709b632-3b79-4965-b00d-c96bf2ab812b-scripts\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.580700 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9709b632-3b79-4965-b00d-c96bf2ab812b-additional-scripts\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.581243 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-log-ovn\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 
20:28:19.581332 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-run\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.581401 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-run-ovn\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.581969 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9709b632-3b79-4965-b00d-c96bf2ab812b-additional-scripts\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.583504 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9709b632-3b79-4965-b00d-c96bf2ab812b-scripts\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.622937 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djxp7\" (UniqueName: \"kubernetes.io/projected/9709b632-3b79-4965-b00d-c96bf2ab812b-kube-api-access-djxp7\") pod \"ovn-controller-jc5cc-config-df2v8\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.708906 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.966090 4858 generic.go:334] "Generic (PLEG): container finished" podID="c7c781eb-de63-45c1-b5b2-0496fe6f2d34" containerID="8b1f356bfce80cd8a433abad4d931289ef5df2cd7837018a90d62ec7b4fe60e2" exitCode=0 Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.966222 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-h782w" event={"ID":"c7c781eb-de63-45c1-b5b2-0496fe6f2d34","Type":"ContainerDied","Data":"8b1f356bfce80cd8a433abad4d931289ef5df2cd7837018a90d62ec7b4fe60e2"} Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.989248 4858 generic.go:334] "Generic (PLEG): container finished" podID="825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" containerID="a7f9036c77e96dfe20e59ced87acad06df172066fcc8ff6ae5ba1b818cc4ed32" exitCode=0 Jan 27 20:28:19 crc kubenswrapper[4858]: I0127 20:28:19.990131 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2","Type":"ContainerDied","Data":"a7f9036c77e96dfe20e59ced87acad06df172066fcc8ff6ae5ba1b818cc4ed32"} Jan 27 20:28:20 crc kubenswrapper[4858]: I0127 20:28:20.464605 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jc5cc-config-df2v8"] Jan 27 20:28:20 crc kubenswrapper[4858]: W0127 20:28:20.465951 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9709b632_3b79_4965_b00d_c96bf2ab812b.slice/crio-aec632cafd68e78a669bbeb80f712660099e39b8b6e460fa7355e5e491410aab WatchSource:0}: Error finding container aec632cafd68e78a669bbeb80f712660099e39b8b6e460fa7355e5e491410aab: Status 404 returned error can't find the container with id aec632cafd68e78a669bbeb80f712660099e39b8b6e460fa7355e5e491410aab Jan 27 20:28:21 crc kubenswrapper[4858]: I0127 20:28:21.000339 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0","Type":"ContainerStarted","Data":"c78c0542eef88e01616f455080f4df87d1dfe0c83fe75318e26106a71f1b34cf"} Jan 27 20:28:21 crc kubenswrapper[4858]: I0127 20:28:21.004368 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jc5cc-config-df2v8" event={"ID":"9709b632-3b79-4965-b00d-c96bf2ab812b","Type":"ContainerStarted","Data":"22c0361c866286920f89895512ac3ec83272b9a4efc0c82117f0e6d5d8928c5d"} Jan 27 20:28:21 crc kubenswrapper[4858]: I0127 20:28:21.004416 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jc5cc-config-df2v8" event={"ID":"9709b632-3b79-4965-b00d-c96bf2ab812b","Type":"ContainerStarted","Data":"aec632cafd68e78a669bbeb80f712660099e39b8b6e460fa7355e5e491410aab"} Jan 27 20:28:21 crc kubenswrapper[4858]: I0127 20:28:21.009776 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2","Type":"ContainerStarted","Data":"bcb1d6cdad9834a8ca239bc0e4bd61fa15a8e10dc74226d26466495bc626a052"} Jan 27 20:28:21 crc kubenswrapper[4858]: I0127 20:28:21.010312 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 20:28:21 crc kubenswrapper[4858]: I0127 20:28:21.054861 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371958.799938 
podStartE2EDuration="1m18.054836816s" podCreationTimestamp="2026-01-27 20:27:03 +0000 UTC" firstStartedPulling="2026-01-27 20:27:20.848715248 +0000 UTC m=+1185.556530954" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:28:21.054245969 +0000 UTC m=+1245.762061685" watchObservedRunningTime="2026-01-27 20:28:21.054836816 +0000 UTC m=+1245.762652522" Jan 27 20:28:21 crc kubenswrapper[4858]: I0127 20:28:21.082993 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-jc5cc-config-df2v8" podStartSLOduration=2.082973732 podStartE2EDuration="2.082973732s" podCreationTimestamp="2026-01-27 20:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:28:21.081576261 +0000 UTC m=+1245.789391967" watchObservedRunningTime="2026-01-27 20:28:21.082973732 +0000 UTC m=+1245.790789438" Jan 27 20:28:21 crc kubenswrapper[4858]: I0127 20:28:21.440630 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-h782w" Jan 27 20:28:21 crc kubenswrapper[4858]: I0127 20:28:21.519345 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw5tc\" (UniqueName: \"kubernetes.io/projected/c7c781eb-de63-45c1-b5b2-0496fe6f2d34-kube-api-access-tw5tc\") pod \"c7c781eb-de63-45c1-b5b2-0496fe6f2d34\" (UID: \"c7c781eb-de63-45c1-b5b2-0496fe6f2d34\") " Jan 27 20:28:21 crc kubenswrapper[4858]: I0127 20:28:21.519575 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7c781eb-de63-45c1-b5b2-0496fe6f2d34-operator-scripts\") pod \"c7c781eb-de63-45c1-b5b2-0496fe6f2d34\" (UID: \"c7c781eb-de63-45c1-b5b2-0496fe6f2d34\") " Jan 27 20:28:21 crc kubenswrapper[4858]: I0127 20:28:21.520850 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7c781eb-de63-45c1-b5b2-0496fe6f2d34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7c781eb-de63-45c1-b5b2-0496fe6f2d34" (UID: "c7c781eb-de63-45c1-b5b2-0496fe6f2d34"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:21 crc kubenswrapper[4858]: I0127 20:28:21.530481 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7c781eb-de63-45c1-b5b2-0496fe6f2d34-kube-api-access-tw5tc" (OuterVolumeSpecName: "kube-api-access-tw5tc") pod "c7c781eb-de63-45c1-b5b2-0496fe6f2d34" (UID: "c7c781eb-de63-45c1-b5b2-0496fe6f2d34"). InnerVolumeSpecName "kube-api-access-tw5tc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:21 crc kubenswrapper[4858]: I0127 20:28:21.622376 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7c781eb-de63-45c1-b5b2-0496fe6f2d34-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:21 crc kubenswrapper[4858]: I0127 20:28:21.622414 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw5tc\" (UniqueName: \"kubernetes.io/projected/c7c781eb-de63-45c1-b5b2-0496fe6f2d34-kube-api-access-tw5tc\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:21 crc kubenswrapper[4858]: I0127 20:28:21.710756 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kdm4j"] Jan 27 20:28:21 crc kubenswrapper[4858]: I0127 20:28:21.718514 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kdm4j"] Jan 27 20:28:22 crc kubenswrapper[4858]: I0127 20:28:22.019377 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-h782w" event={"ID":"c7c781eb-de63-45c1-b5b2-0496fe6f2d34","Type":"ContainerDied","Data":"9feb9717fb2d58a1b03b35dcc2ed9eb769b85028aca69fd7f8accea31c47afcf"} Jan 27 20:28:22 crc kubenswrapper[4858]: I0127 20:28:22.019436 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9feb9717fb2d58a1b03b35dcc2ed9eb769b85028aca69fd7f8accea31c47afcf" Jan 27 20:28:22 crc kubenswrapper[4858]: I0127 20:28:22.019390 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-h782w" Jan 27 20:28:22 crc kubenswrapper[4858]: I0127 20:28:22.021616 4858 generic.go:334] "Generic (PLEG): container finished" podID="6c539609-6c9e-46bc-a0d7-6a629e83ce17" containerID="9159361669f202d325e9ed3fa878b087c229e8652e66a74175e9b39b3e2e5fb6" exitCode=0 Jan 27 20:28:22 crc kubenswrapper[4858]: I0127 20:28:22.021712 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"6c539609-6c9e-46bc-a0d7-6a629e83ce17","Type":"ContainerDied","Data":"9159361669f202d325e9ed3fa878b087c229e8652e66a74175e9b39b3e2e5fb6"} Jan 27 20:28:22 crc kubenswrapper[4858]: I0127 20:28:22.023768 4858 generic.go:334] "Generic (PLEG): container finished" podID="9709b632-3b79-4965-b00d-c96bf2ab812b" containerID="22c0361c866286920f89895512ac3ec83272b9a4efc0c82117f0e6d5d8928c5d" exitCode=0 Jan 27 20:28:22 crc kubenswrapper[4858]: I0127 20:28:22.023825 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jc5cc-config-df2v8" event={"ID":"9709b632-3b79-4965-b00d-c96bf2ab812b","Type":"ContainerDied","Data":"22c0361c866286920f89895512ac3ec83272b9a4efc0c82117f0e6d5d8928c5d"} Jan 27 20:28:22 crc kubenswrapper[4858]: I0127 20:28:22.104267 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432" path="/var/lib/kubelet/pods/f83b2ba0-4a86-4f26-8ee8-88ff3bcd3432/volumes" Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.036803 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"6c539609-6c9e-46bc-a0d7-6a629e83ce17","Type":"ContainerStarted","Data":"88a891049c993f3701a1116b1e8bb1fbfd5f954ba6cae5d145f9c47053e823a6"} Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.038960 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0" Jan 
27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.371185 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.380030 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/177247c1-763d-4d0c-81ba-f538937f0008-etc-swift\") pod \"swift-storage-0\" (UID: \"177247c1-763d-4d0c-81ba-f538937f0008\") " pod="openstack/swift-storage-0" Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.583006 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.677206 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.712212 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=-9223371957.14259 podStartE2EDuration="1m19.712185499s" podCreationTimestamp="2026-01-27 20:27:04 +0000 UTC" firstStartedPulling="2026-01-27 20:27:20.831903154 +0000 UTC m=+1185.539718860" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:28:23.090933705 +0000 UTC m=+1247.798749431" watchObservedRunningTime="2026-01-27 20:28:23.712185499 +0000 UTC m=+1248.420001205" Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.778255 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-run\") pod \"9709b632-3b79-4965-b00d-c96bf2ab812b\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.778327 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djxp7\" (UniqueName: \"kubernetes.io/projected/9709b632-3b79-4965-b00d-c96bf2ab812b-kube-api-access-djxp7\") pod \"9709b632-3b79-4965-b00d-c96bf2ab812b\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.778392 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9709b632-3b79-4965-b00d-c96bf2ab812b-scripts\") pod \"9709b632-3b79-4965-b00d-c96bf2ab812b\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.778417 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-run" (OuterVolumeSpecName: "var-run") pod "9709b632-3b79-4965-b00d-c96bf2ab812b" (UID: "9709b632-3b79-4965-b00d-c96bf2ab812b"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.778533 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9709b632-3b79-4965-b00d-c96bf2ab812b-additional-scripts\") pod \"9709b632-3b79-4965-b00d-c96bf2ab812b\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.778609 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-log-ovn\") pod \"9709b632-3b79-4965-b00d-c96bf2ab812b\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.778628 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-run-ovn\") pod \"9709b632-3b79-4965-b00d-c96bf2ab812b\" (UID: \"9709b632-3b79-4965-b00d-c96bf2ab812b\") " Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.779061 4858 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.779124 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9709b632-3b79-4965-b00d-c96bf2ab812b" (UID: "9709b632-3b79-4965-b00d-c96bf2ab812b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.779439 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9709b632-3b79-4965-b00d-c96bf2ab812b" (UID: "9709b632-3b79-4965-b00d-c96bf2ab812b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.779800 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9709b632-3b79-4965-b00d-c96bf2ab812b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9709b632-3b79-4965-b00d-c96bf2ab812b" (UID: "9709b632-3b79-4965-b00d-c96bf2ab812b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.779863 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9709b632-3b79-4965-b00d-c96bf2ab812b-scripts" (OuterVolumeSpecName: "scripts") pod "9709b632-3b79-4965-b00d-c96bf2ab812b" (UID: "9709b632-3b79-4965-b00d-c96bf2ab812b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.784845 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9709b632-3b79-4965-b00d-c96bf2ab812b-kube-api-access-djxp7" (OuterVolumeSpecName: "kube-api-access-djxp7") pod "9709b632-3b79-4965-b00d-c96bf2ab812b" (UID: "9709b632-3b79-4965-b00d-c96bf2ab812b"). InnerVolumeSpecName "kube-api-access-djxp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.881519 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djxp7\" (UniqueName: \"kubernetes.io/projected/9709b632-3b79-4965-b00d-c96bf2ab812b-kube-api-access-djxp7\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.881965 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9709b632-3b79-4965-b00d-c96bf2ab812b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.881978 4858 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9709b632-3b79-4965-b00d-c96bf2ab812b-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.881990 4858 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:23 crc kubenswrapper[4858]: I0127 20:28:23.882003 4858 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9709b632-3b79-4965-b00d-c96bf2ab812b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:24 crc kubenswrapper[4858]: I0127 20:28:24.048359 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jc5cc-config-df2v8" event={"ID":"9709b632-3b79-4965-b00d-c96bf2ab812b","Type":"ContainerDied","Data":"aec632cafd68e78a669bbeb80f712660099e39b8b6e460fa7355e5e491410aab"} Jan 27 20:28:24 crc kubenswrapper[4858]: I0127 20:28:24.048434 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aec632cafd68e78a669bbeb80f712660099e39b8b6e460fa7355e5e491410aab" Jan 27 20:28:24 crc kubenswrapper[4858]: I0127 20:28:24.048468 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jc5cc-config-df2v8" Jan 27 20:28:24 crc kubenswrapper[4858]: I0127 20:28:24.111191 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-jc5cc" Jan 27 20:28:24 crc kubenswrapper[4858]: I0127 20:28:24.252765 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 20:28:24 crc kubenswrapper[4858]: W0127 20:28:24.283779 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod177247c1_763d_4d0c_81ba_f538937f0008.slice/crio-86db8514b1e851a02a0b8af8eb01a9c3c5eafddb744c92aaf076e80238c0a1a5 WatchSource:0}: Error finding container 86db8514b1e851a02a0b8af8eb01a9c3c5eafddb744c92aaf076e80238c0a1a5: Status 404 returned error can't find the container with id 86db8514b1e851a02a0b8af8eb01a9c3c5eafddb744c92aaf076e80238c0a1a5 Jan 27 20:28:24 crc kubenswrapper[4858]: I0127 20:28:24.804442 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-jc5cc-config-df2v8"] Jan 27 20:28:24 crc kubenswrapper[4858]: I0127 20:28:24.813267 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-jc5cc-config-df2v8"] Jan 27 20:28:25 crc kubenswrapper[4858]: I0127 20:28:25.079233 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"177247c1-763d-4d0c-81ba-f538937f0008","Type":"ContainerStarted","Data":"86db8514b1e851a02a0b8af8eb01a9c3c5eafddb744c92aaf076e80238c0a1a5"} Jan 27 20:28:25 crc kubenswrapper[4858]: I0127 20:28:25.112917 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="ad881410-229a-4427-862b-8febd0e5ab61" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.095570 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9709b632-3b79-4965-b00d-c96bf2ab812b" path="/var/lib/kubelet/pods/9709b632-3b79-4965-b00d-c96bf2ab812b/volumes" Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.129229 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"177247c1-763d-4d0c-81ba-f538937f0008","Type":"ContainerStarted","Data":"0a30ebd4dd77c111e8dfbe97f5db279c2be5f07e6facbb474c62e109ed0d6f2c"} Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.129281 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"177247c1-763d-4d0c-81ba-f538937f0008","Type":"ContainerStarted","Data":"5a5ceba00a8456a3d9e64bcaff95df42594edfae8a13903738360f50840448ac"} Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.129293 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"177247c1-763d-4d0c-81ba-f538937f0008","Type":"ContainerStarted","Data":"f841e1a0b7cc0982c2b2c67b496333dfba6007dceafb3ec95ba5a2acdc7fc84a"} Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.129303 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"177247c1-763d-4d0c-81ba-f538937f0008","Type":"ContainerStarted","Data":"18b32aec946e471f4fa6e51c7a8a023f2b764268aa77c28de7d6d8b10a357863"} Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.819008 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-gxl97"] Jan 27 20:28:26 crc 
kubenswrapper[4858]: E0127 20:28:26.819415 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c781eb-de63-45c1-b5b2-0496fe6f2d34" containerName="mariadb-database-create" Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.819433 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c781eb-de63-45c1-b5b2-0496fe6f2d34" containerName="mariadb-database-create" Jan 27 20:28:26 crc kubenswrapper[4858]: E0127 20:28:26.819469 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9709b632-3b79-4965-b00d-c96bf2ab812b" containerName="ovn-config" Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.819475 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9709b632-3b79-4965-b00d-c96bf2ab812b" containerName="ovn-config" Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.819657 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9709b632-3b79-4965-b00d-c96bf2ab812b" containerName="ovn-config" Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.819681 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7c781eb-de63-45c1-b5b2-0496fe6f2d34" containerName="mariadb-database-create" Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.820346 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gxl97" Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.827434 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.833187 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gxl97"] Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.844855 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/102cbac7-d601-47cf-b5d5-7279a3453669-operator-scripts\") pod \"root-account-create-update-gxl97\" (UID: \"102cbac7-d601-47cf-b5d5-7279a3453669\") " pod="openstack/root-account-create-update-gxl97" Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.844916 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wpsv\" (UniqueName: \"kubernetes.io/projected/102cbac7-d601-47cf-b5d5-7279a3453669-kube-api-access-2wpsv\") pod \"root-account-create-update-gxl97\" (UID: \"102cbac7-d601-47cf-b5d5-7279a3453669\") " pod="openstack/root-account-create-update-gxl97" Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.947296 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wpsv\" (UniqueName: \"kubernetes.io/projected/102cbac7-d601-47cf-b5d5-7279a3453669-kube-api-access-2wpsv\") pod \"root-account-create-update-gxl97\" (UID: \"102cbac7-d601-47cf-b5d5-7279a3453669\") " pod="openstack/root-account-create-update-gxl97" Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.947938 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/102cbac7-d601-47cf-b5d5-7279a3453669-operator-scripts\") pod \"root-account-create-update-gxl97\" (UID: \"102cbac7-d601-47cf-b5d5-7279a3453669\") " pod="openstack/root-account-create-update-gxl97" Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.948657 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/102cbac7-d601-47cf-b5d5-7279a3453669-operator-scripts\") pod \"root-account-create-update-gxl97\" (UID: \"102cbac7-d601-47cf-b5d5-7279a3453669\") " pod="openstack/root-account-create-update-gxl97" Jan 27 20:28:26 crc kubenswrapper[4858]: I0127 20:28:26.973621 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wpsv\" (UniqueName: \"kubernetes.io/projected/102cbac7-d601-47cf-b5d5-7279a3453669-kube-api-access-2wpsv\") pod \"root-account-create-update-gxl97\" (UID: \"102cbac7-d601-47cf-b5d5-7279a3453669\") " pod="openstack/root-account-create-update-gxl97" Jan 27 20:28:27 crc kubenswrapper[4858]: I0127 20:28:27.142706 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"177247c1-763d-4d0c-81ba-f538937f0008","Type":"ContainerStarted","Data":"63287f57e87b6ec6a8f2ba8205308c0c9fd534c3a66406c6cf12123a8529b93c"} Jan 27 20:28:27 crc kubenswrapper[4858]: I0127 20:28:27.218744 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gxl97" Jan 27 20:28:27 crc kubenswrapper[4858]: I0127 20:28:27.778324 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gxl97"] Jan 27 20:28:28 crc kubenswrapper[4858]: I0127 20:28:28.155983 4858 generic.go:334] "Generic (PLEG): container finished" podID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerID="c78c0542eef88e01616f455080f4df87d1dfe0c83fe75318e26106a71f1b34cf" exitCode=0 Jan 27 20:28:28 crc kubenswrapper[4858]: I0127 20:28:28.156081 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0","Type":"ContainerDied","Data":"c78c0542eef88e01616f455080f4df87d1dfe0c83fe75318e26106a71f1b34cf"} Jan 27 20:28:28 crc kubenswrapper[4858]: I0127 20:28:28.170428 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gxl97" event={"ID":"102cbac7-d601-47cf-b5d5-7279a3453669","Type":"ContainerStarted","Data":"2af5bd9f6018c3bdefdcd11a1fc5f5292aa9740fc308ac54e82ef39926abff28"} Jan 27 20:28:28 crc kubenswrapper[4858]: I0127 20:28:28.170495 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gxl97" event={"ID":"102cbac7-d601-47cf-b5d5-7279a3453669","Type":"ContainerStarted","Data":"e32f38b298ad070e7b00d52b72c32d3d983a093cd2cdeb812d48ad14ada75c70"} Jan 27 20:28:28 crc kubenswrapper[4858]: I0127 20:28:28.192857 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"177247c1-763d-4d0c-81ba-f538937f0008","Type":"ContainerStarted","Data":"32a88d009943e791dc53b7307dc428164239228de9d5a81a08175c5532287a32"} Jan 27 20:28:28 crc kubenswrapper[4858]: I0127 20:28:28.192918 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"177247c1-763d-4d0c-81ba-f538937f0008","Type":"ContainerStarted","Data":"095a288aaec63cf286a04d3a973b4eed42af746000a700a881417e960144ceaa"} Jan 27 20:28:28 crc kubenswrapper[4858]: I0127 20:28:28.192933 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"177247c1-763d-4d0c-81ba-f538937f0008","Type":"ContainerStarted","Data":"2eeb09e74fbb091e7b86837200c31bc52fdfa4abe5d215b2f4cef87830032cbd"} Jan 27 20:28:28 crc kubenswrapper[4858]: I0127 20:28:28.244436 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/root-account-create-update-gxl97" podStartSLOduration=2.244400825 podStartE2EDuration="2.244400825s" podCreationTimestamp="2026-01-27 20:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:28:28.228281077 +0000 UTC m=+1252.936096813" watchObservedRunningTime="2026-01-27 20:28:28.244400825 +0000 UTC m=+1252.952216531" Jan 27 20:28:29 crc kubenswrapper[4858]: I0127 20:28:29.219187 4858 generic.go:334] "Generic (PLEG): container finished" podID="102cbac7-d601-47cf-b5d5-7279a3453669" containerID="2af5bd9f6018c3bdefdcd11a1fc5f5292aa9740fc308ac54e82ef39926abff28" exitCode=0 Jan 27 20:28:29 crc kubenswrapper[4858]: I0127 20:28:29.220083 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gxl97" event={"ID":"102cbac7-d601-47cf-b5d5-7279a3453669","Type":"ContainerDied","Data":"2af5bd9f6018c3bdefdcd11a1fc5f5292aa9740fc308ac54e82ef39926abff28"} Jan 27 20:28:29 crc kubenswrapper[4858]: I0127 20:28:29.232109 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"177247c1-763d-4d0c-81ba-f538937f0008","Type":"ContainerStarted","Data":"71c938764d5495805e14bab2e975557adcb93d143e2b895cba6167c845b08b6a"} Jan 27 20:28:29 crc kubenswrapper[4858]: I0127 20:28:29.232188 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"177247c1-763d-4d0c-81ba-f538937f0008","Type":"ContainerStarted","Data":"4f4f1461c86f0bb646725aff620b4c831f32c1e8be96603bd9ccbfaefbc811f2"} Jan 27 20:28:29 crc kubenswrapper[4858]: I0127 20:28:29.236071 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0","Type":"ContainerStarted","Data":"c8d2007c95307e3fd8a2616de50cfb7b72c811519dd56bba633427b9a89f46fb"} Jan 27 20:28:30 crc kubenswrapper[4858]: I0127 20:28:30.251113 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"177247c1-763d-4d0c-81ba-f538937f0008","Type":"ContainerStarted","Data":"ab7b86a726a5b3a1a953da55857a5a86fbf58d737d64baf52ac3dba414b1b6ea"} Jan 27 20:28:30 crc kubenswrapper[4858]: I0127 20:28:30.251427 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"177247c1-763d-4d0c-81ba-f538937f0008","Type":"ContainerStarted","Data":"293d09d9ae7332a346426da635a3238333bb51d606014d1725e0ee5808c61218"} Jan 27 20:28:30 crc kubenswrapper[4858]: I0127 20:28:30.251438 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"177247c1-763d-4d0c-81ba-f538937f0008","Type":"ContainerStarted","Data":"b7dc26d746510f3616169ad4746abd1710574a369a8c289258b83edc3d2e57b0"} Jan 27 20:28:30 crc kubenswrapper[4858]: I0127 20:28:30.670139 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 27 20:28:30 crc kubenswrapper[4858]: I0127 20:28:30.730134 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gxl97" Jan 27 20:28:30 crc kubenswrapper[4858]: I0127 20:28:30.836142 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/102cbac7-d601-47cf-b5d5-7279a3453669-operator-scripts\") pod \"102cbac7-d601-47cf-b5d5-7279a3453669\" (UID: \"102cbac7-d601-47cf-b5d5-7279a3453669\") " Jan 27 20:28:30 crc kubenswrapper[4858]: I0127 20:28:30.836376 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wpsv\" (UniqueName: \"kubernetes.io/projected/102cbac7-d601-47cf-b5d5-7279a3453669-kube-api-access-2wpsv\") pod \"102cbac7-d601-47cf-b5d5-7279a3453669\" (UID: \"102cbac7-d601-47cf-b5d5-7279a3453669\") " Jan 27 20:28:30 crc kubenswrapper[4858]: I0127 20:28:30.836730 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/102cbac7-d601-47cf-b5d5-7279a3453669-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "102cbac7-d601-47cf-b5d5-7279a3453669" (UID: "102cbac7-d601-47cf-b5d5-7279a3453669"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:30 crc kubenswrapper[4858]: I0127 20:28:30.843075 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102cbac7-d601-47cf-b5d5-7279a3453669-kube-api-access-2wpsv" (OuterVolumeSpecName: "kube-api-access-2wpsv") pod "102cbac7-d601-47cf-b5d5-7279a3453669" (UID: "102cbac7-d601-47cf-b5d5-7279a3453669"). InnerVolumeSpecName "kube-api-access-2wpsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:30 crc kubenswrapper[4858]: I0127 20:28:30.938620 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wpsv\" (UniqueName: \"kubernetes.io/projected/102cbac7-d601-47cf-b5d5-7279a3453669-kube-api-access-2wpsv\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:30 crc kubenswrapper[4858]: I0127 20:28:30.938672 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/102cbac7-d601-47cf-b5d5-7279a3453669-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.262708 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gxl97" event={"ID":"102cbac7-d601-47cf-b5d5-7279a3453669","Type":"ContainerDied","Data":"e32f38b298ad070e7b00d52b72c32d3d983a093cd2cdeb812d48ad14ada75c70"} Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.263195 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e32f38b298ad070e7b00d52b72c32d3d983a093cd2cdeb812d48ad14ada75c70" Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.263295 4858 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.282205 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"177247c1-763d-4d0c-81ba-f538937f0008","Type":"ContainerStarted","Data":"6dc64acd5fa6d8ae8fff08f977224e7de1c3c9c87bd50ffe7f4383b3ffcb8eac"}
Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.282274 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"177247c1-763d-4d0c-81ba-f538937f0008","Type":"ContainerStarted","Data":"1404588099d3506de4f63a15eb05b3c92208c92f165cc53e6093a44b5f09a265"}
Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.347338 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.989630433 podStartE2EDuration="41.347305968s" podCreationTimestamp="2026-01-27 20:27:50 +0000 UTC" firstStartedPulling="2026-01-27 20:28:24.287923943 +0000 UTC m=+1248.995739649" lastFinishedPulling="2026-01-27 20:28:28.645599478 +0000 UTC m=+1253.353415184" observedRunningTime="2026-01-27 20:28:31.34116781 +0000 UTC m=+1256.048983556" watchObservedRunningTime="2026-01-27 20:28:31.347305968 +0000 UTC m=+1256.055121694"
Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.693183 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b485d48dc-srxmr"]
Jan 27 20:28:31 crc kubenswrapper[4858]: E0127 20:28:31.693606 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102cbac7-d601-47cf-b5d5-7279a3453669" containerName="mariadb-account-create-update"
Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.693626 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="102cbac7-d601-47cf-b5d5-7279a3453669" containerName="mariadb-account-create-update"
Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.693798 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="102cbac7-d601-47cf-b5d5-7279a3453669" containerName="mariadb-account-create-update"
Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.698132 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b485d48dc-srxmr"
Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.701941 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.719937 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b485d48dc-srxmr"]
Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.758730 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b485d48dc-srxmr\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr"
Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.758869 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b485d48dc-srxmr\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr"
Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.759016 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jzsj\" (UniqueName: \"kubernetes.io/projected/38c884ca-127a-4e48-a05a-bd1834beb22b-kube-api-access-7jzsj\") pod \"dnsmasq-dns-6b485d48dc-srxmr\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr"
Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.759154 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-config\") pod \"dnsmasq-dns-6b485d48dc-srxmr\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr"
Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.759263 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b485d48dc-srxmr\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr"
Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.760343 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-dns-svc\") pod \"dnsmasq-dns-6b485d48dc-srxmr\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr"
Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.867364 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b485d48dc-srxmr\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr"
Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.867458 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b485d48dc-srxmr\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr"
\"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.867490 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jzsj\" (UniqueName: \"kubernetes.io/projected/38c884ca-127a-4e48-a05a-bd1834beb22b-kube-api-access-7jzsj\") pod \"dnsmasq-dns-6b485d48dc-srxmr\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.867511 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-config\") pod \"dnsmasq-dns-6b485d48dc-srxmr\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.867529 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b485d48dc-srxmr\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.867611 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-dns-svc\") pod \"dnsmasq-dns-6b485d48dc-srxmr\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.868577 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b485d48dc-srxmr\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.868592 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-dns-svc\") pod \"dnsmasq-dns-6b485d48dc-srxmr\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.868628 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b485d48dc-srxmr\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.869062 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-config\") pod \"dnsmasq-dns-6b485d48dc-srxmr\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" Jan 27 20:28:31 crc kubenswrapper[4858]: I0127 20:28:31.869477 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b485d48dc-srxmr\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" Jan 27 20:28:31 crc kubenswrapper[4858]: 
Jan 27 20:28:32 crc kubenswrapper[4858]: I0127 20:28:32.060408 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b485d48dc-srxmr"
Jan 27 20:28:32 crc kubenswrapper[4858]: I0127 20:28:32.306415 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0","Type":"ContainerStarted","Data":"f006c7feb5b087fe7ceeb39b589be407ce3441aae0d68fdf52f54cd807f4a6d6"}
Jan 27 20:28:32 crc kubenswrapper[4858]: I0127 20:28:32.306966 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0","Type":"ContainerStarted","Data":"d8a7562fab9559e8b2a4896ed4b5bf668ee2be4e74b7f27fa4fa650ace0bbf57"}
Jan 27 20:28:32 crc kubenswrapper[4858]: I0127 20:28:32.321498 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Jan 27 20:28:32 crc kubenswrapper[4858]: I0127 20:28:32.321591 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Jan 27 20:28:32 crc kubenswrapper[4858]: I0127 20:28:32.336422 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Jan 27 20:28:32 crc kubenswrapper[4858]: I0127 20:28:32.361347 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.361314889 podStartE2EDuration="16.361314889s" podCreationTimestamp="2026-01-27 20:28:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:28:32.349654541 +0000 UTC m=+1257.057470247" watchObservedRunningTime="2026-01-27 20:28:32.361314889 +0000 UTC m=+1257.069130595"
Jan 27 20:28:32 crc kubenswrapper[4858]: I0127 20:28:32.440431 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b485d48dc-srxmr"]
Jan 27 20:28:33 crc kubenswrapper[4858]: I0127 20:28:33.314943 4858 generic.go:334] "Generic (PLEG): container finished" podID="38c884ca-127a-4e48-a05a-bd1834beb22b" containerID="e486f805a42fa728f387e0a3665b5b02c4d75eb1b3d531ddc088f9d1db72f8f9" exitCode=0
Jan 27 20:28:33 crc kubenswrapper[4858]: I0127 20:28:33.315007 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" event={"ID":"38c884ca-127a-4e48-a05a-bd1834beb22b","Type":"ContainerDied","Data":"e486f805a42fa728f387e0a3665b5b02c4d75eb1b3d531ddc088f9d1db72f8f9"}
Jan 27 20:28:33 crc kubenswrapper[4858]: I0127 20:28:33.317537 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" event={"ID":"38c884ca-127a-4e48-a05a-bd1834beb22b","Type":"ContainerStarted","Data":"4ee80fc1ba41f3e5c054119b17dc67fdbe597e2285dcdcd671cf9e99ca3b393e"}
Jan 27 20:28:33 crc kubenswrapper[4858]: I0127 20:28:33.326120 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Jan 27 20:28:34 crc kubenswrapper[4858]: I0127 20:28:34.342810 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" event={"ID":"38c884ca-127a-4e48-a05a-bd1834beb22b","Type":"ContainerStarted","Data":"828cd25448002f0411534faf9fb23020ccdc3c631c333c25285470b71670395b"}
Jan 27 20:28:34 crc kubenswrapper[4858]: I0127 20:28:34.349863 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b485d48dc-srxmr"
Jan 27 20:28:34 crc kubenswrapper[4858]: I0127 20:28:34.373040 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" podStartSLOduration=3.373017441 podStartE2EDuration="3.373017441s" podCreationTimestamp="2026-01-27 20:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:28:34.365814862 +0000 UTC m=+1259.073630588" watchObservedRunningTime="2026-01-27 20:28:34.373017441 +0000 UTC m=+1259.080833147"
Jan 27 20:28:35 crc kubenswrapper[4858]: I0127 20:28:35.058259 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused"
Jan 27 20:28:35 crc kubenswrapper[4858]: I0127 20:28:35.111916 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 20:28:35 crc kubenswrapper[4858]: I0127 20:28:35.441172 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="6c539609-6c9e-46bc-a0d7-6a629e83ce17" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused"
Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.062909 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b485d48dc-srxmr"
Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.146897 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566979bdbf-kg5cb"]
Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.147210 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" podUID="a60150fc-ab94-47ae-b1e8-247c240f0995" containerName="dnsmasq-dns" containerID="cri-o://141cfd907442ce40bae896f1310659ad45760b2c589d9a925d18147b8d550ee4" gracePeriod=10
Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.422005 4858 generic.go:334] "Generic (PLEG): container finished" podID="a60150fc-ab94-47ae-b1e8-247c240f0995" containerID="141cfd907442ce40bae896f1310659ad45760b2c589d9a925d18147b8d550ee4" exitCode=0
Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.422620 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" event={"ID":"a60150fc-ab94-47ae-b1e8-247c240f0995","Type":"ContainerDied","Data":"141cfd907442ce40bae896f1310659ad45760b2c589d9a925d18147b8d550ee4"}
Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.625858 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566979bdbf-kg5cb"
Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.782493 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf6j5\" (UniqueName: \"kubernetes.io/projected/a60150fc-ab94-47ae-b1e8-247c240f0995-kube-api-access-xf6j5\") pod \"a60150fc-ab94-47ae-b1e8-247c240f0995\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") "
Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.782574 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-ovsdbserver-sb\") pod \"a60150fc-ab94-47ae-b1e8-247c240f0995\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") "
Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.782687 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-dns-svc\") pod \"a60150fc-ab94-47ae-b1e8-247c240f0995\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") "
Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.782729 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-config\") pod \"a60150fc-ab94-47ae-b1e8-247c240f0995\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") "
Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.782747 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-ovsdbserver-nb\") pod \"a60150fc-ab94-47ae-b1e8-247c240f0995\" (UID: \"a60150fc-ab94-47ae-b1e8-247c240f0995\") "
Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.794227 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a60150fc-ab94-47ae-b1e8-247c240f0995-kube-api-access-xf6j5" (OuterVolumeSpecName: "kube-api-access-xf6j5") pod "a60150fc-ab94-47ae-b1e8-247c240f0995" (UID: "a60150fc-ab94-47ae-b1e8-247c240f0995"). InnerVolumeSpecName "kube-api-access-xf6j5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.837914 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a60150fc-ab94-47ae-b1e8-247c240f0995" (UID: "a60150fc-ab94-47ae-b1e8-247c240f0995"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.838007 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-config" (OuterVolumeSpecName: "config") pod "a60150fc-ab94-47ae-b1e8-247c240f0995" (UID: "a60150fc-ab94-47ae-b1e8-247c240f0995"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.860139 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a60150fc-ab94-47ae-b1e8-247c240f0995" (UID: "a60150fc-ab94-47ae-b1e8-247c240f0995"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.865607 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a60150fc-ab94-47ae-b1e8-247c240f0995" (UID: "a60150fc-ab94-47ae-b1e8-247c240f0995"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.885744 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.885781 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.885797 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf6j5\" (UniqueName: \"kubernetes.io/projected/a60150fc-ab94-47ae-b1e8-247c240f0995-kube-api-access-xf6j5\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.885811 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:42 crc kubenswrapper[4858]: I0127 20:28:42.885821 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a60150fc-ab94-47ae-b1e8-247c240f0995-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:43 crc kubenswrapper[4858]: I0127 20:28:43.432924 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566979bdbf-kg5cb" event={"ID":"a60150fc-ab94-47ae-b1e8-247c240f0995","Type":"ContainerDied","Data":"771f3b1264cbb5c3416a3d86ea6ab851a1b0901185e5c9ac6c31ddf115f25836"} Jan 27 20:28:43 crc kubenswrapper[4858]: I0127 20:28:43.433010 4858 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 20:28:43 crc kubenswrapper[4858]: I0127 20:28:43.433476 4858 scope.go:117] "RemoveContainer" containerID="141cfd907442ce40bae896f1310659ad45760b2c589d9a925d18147b8d550ee4"
Jan 27 20:28:43 crc kubenswrapper[4858]: I0127 20:28:43.463680 4858 scope.go:117] "RemoveContainer" containerID="0340505a9d27a8f05c02d5caa376e25c316e5cf186aaac16586595b145468313"
Jan 27 20:28:43 crc kubenswrapper[4858]: I0127 20:28:43.477886 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566979bdbf-kg5cb"]
Jan 27 20:28:43 crc kubenswrapper[4858]: I0127 20:28:43.582832 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566979bdbf-kg5cb"]
Jan 27 20:28:44 crc kubenswrapper[4858]: I0127 20:28:44.098517 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a60150fc-ab94-47ae-b1e8-247c240f0995" path="/var/lib/kubelet/pods/a60150fc-ab94-47ae-b1e8-247c240f0995/volumes"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.056884 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.436035 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.450829 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-hh25x"]
Jan 27 20:28:45 crc kubenswrapper[4858]: E0127 20:28:45.451243 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60150fc-ab94-47ae-b1e8-247c240f0995" containerName="init"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.451268 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60150fc-ab94-47ae-b1e8-247c240f0995" containerName="init"
Jan 27 20:28:45 crc kubenswrapper[4858]: E0127 20:28:45.451287 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60150fc-ab94-47ae-b1e8-247c240f0995" containerName="dnsmasq-dns"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.451298 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60150fc-ab94-47ae-b1e8-247c240f0995" containerName="dnsmasq-dns"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.451660 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a60150fc-ab94-47ae-b1e8-247c240f0995" containerName="dnsmasq-dns"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.452567 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hh25x"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.470400 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hh25x"]
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.588182 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-ch6lv"]
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.589495 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ch6lv"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.601885 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cc4d-account-create-update-d2jtl"]
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.613226 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cc4d-account-create-update-d2jtl"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.616521 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.636509 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cc4d-account-create-update-d2jtl"]
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.640455 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d33fccf1-31e1-48da-9f19-598e521a8357-operator-scripts\") pod \"barbican-db-create-hh25x\" (UID: \"d33fccf1-31e1-48da-9f19-598e521a8357\") " pod="openstack/barbican-db-create-hh25x"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.640580 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dtkh\" (UniqueName: \"kubernetes.io/projected/d33fccf1-31e1-48da-9f19-598e521a8357-kube-api-access-9dtkh\") pod \"barbican-db-create-hh25x\" (UID: \"d33fccf1-31e1-48da-9f19-598e521a8357\") " pod="openstack/barbican-db-create-hh25x"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.653497 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ch6lv"]
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.679796 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-364b-account-create-update-csrfb"]
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.689747 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-364b-account-create-update-csrfb"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.696899 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.710782 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-364b-account-create-update-csrfb"]
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.745712 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e7fc25-3533-452e-b1cd-1bb63cf92b60-operator-scripts\") pod \"cinder-cc4d-account-create-update-d2jtl\" (UID: \"06e7fc25-3533-452e-b1cd-1bb63cf92b60\") " pod="openstack/cinder-cc4d-account-create-update-d2jtl"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.745777 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z7tl\" (UniqueName: \"kubernetes.io/projected/3dc3a612-b94e-44d8-a37f-7788316b9156-kube-api-access-6z7tl\") pod \"cinder-db-create-ch6lv\" (UID: \"3dc3a612-b94e-44d8-a37f-7788316b9156\") " pod="openstack/cinder-db-create-ch6lv"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.746031 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d33fccf1-31e1-48da-9f19-598e521a8357-operator-scripts\") pod \"barbican-db-create-hh25x\" (UID: \"d33fccf1-31e1-48da-9f19-598e521a8357\") " pod="openstack/barbican-db-create-hh25x"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.746081 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dtkh\" (UniqueName: \"kubernetes.io/projected/d33fccf1-31e1-48da-9f19-598e521a8357-kube-api-access-9dtkh\") pod \"barbican-db-create-hh25x\" (UID: \"d33fccf1-31e1-48da-9f19-598e521a8357\") " pod="openstack/barbican-db-create-hh25x"
\"kubernetes.io/projected/d33fccf1-31e1-48da-9f19-598e521a8357-kube-api-access-9dtkh\") pod \"barbican-db-create-hh25x\" (UID: \"d33fccf1-31e1-48da-9f19-598e521a8357\") " pod="openstack/barbican-db-create-hh25x" Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.746098 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc8m4\" (UniqueName: \"kubernetes.io/projected/06e7fc25-3533-452e-b1cd-1bb63cf92b60-kube-api-access-qc8m4\") pod \"cinder-cc4d-account-create-update-d2jtl\" (UID: \"06e7fc25-3533-452e-b1cd-1bb63cf92b60\") " pod="openstack/cinder-cc4d-account-create-update-d2jtl" Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.746114 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dc3a612-b94e-44d8-a37f-7788316b9156-operator-scripts\") pod \"cinder-db-create-ch6lv\" (UID: \"3dc3a612-b94e-44d8-a37f-7788316b9156\") " pod="openstack/cinder-db-create-ch6lv" Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.747168 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d33fccf1-31e1-48da-9f19-598e521a8357-operator-scripts\") pod \"barbican-db-create-hh25x\" (UID: \"d33fccf1-31e1-48da-9f19-598e521a8357\") " pod="openstack/barbican-db-create-hh25x" Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.771789 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dtkh\" (UniqueName: \"kubernetes.io/projected/d33fccf1-31e1-48da-9f19-598e521a8357-kube-api-access-9dtkh\") pod \"barbican-db-create-hh25x\" (UID: \"d33fccf1-31e1-48da-9f19-598e521a8357\") " pod="openstack/barbican-db-create-hh25x" Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.777935 4858 util.go:30] "No sandbox for pod can be found. 
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.870303 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ab95708-4d18-4482-be83-b9f184b8a8f0-operator-scripts\") pod \"barbican-364b-account-create-update-csrfb\" (UID: \"3ab95708-4d18-4482-be83-b9f184b8a8f0\") " pod="openstack/barbican-364b-account-create-update-csrfb"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.870457 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z59l\" (UniqueName: \"kubernetes.io/projected/3ab95708-4d18-4482-be83-b9f184b8a8f0-kube-api-access-7z59l\") pod \"barbican-364b-account-create-update-csrfb\" (UID: \"3ab95708-4d18-4482-be83-b9f184b8a8f0\") " pod="openstack/barbican-364b-account-create-update-csrfb"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.870638 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc8m4\" (UniqueName: \"kubernetes.io/projected/06e7fc25-3533-452e-b1cd-1bb63cf92b60-kube-api-access-qc8m4\") pod \"cinder-cc4d-account-create-update-d2jtl\" (UID: \"06e7fc25-3533-452e-b1cd-1bb63cf92b60\") " pod="openstack/cinder-cc4d-account-create-update-d2jtl"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.870665 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dc3a612-b94e-44d8-a37f-7788316b9156-operator-scripts\") pod \"cinder-db-create-ch6lv\" (UID: \"3dc3a612-b94e-44d8-a37f-7788316b9156\") " pod="openstack/cinder-db-create-ch6lv"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.870996 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e7fc25-3533-452e-b1cd-1bb63cf92b60-operator-scripts\") pod \"cinder-cc4d-account-create-update-d2jtl\" (UID: \"06e7fc25-3533-452e-b1cd-1bb63cf92b60\") " pod="openstack/cinder-cc4d-account-create-update-d2jtl"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.871053 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z7tl\" (UniqueName: \"kubernetes.io/projected/3dc3a612-b94e-44d8-a37f-7788316b9156-kube-api-access-6z7tl\") pod \"cinder-db-create-ch6lv\" (UID: \"3dc3a612-b94e-44d8-a37f-7788316b9156\") " pod="openstack/cinder-db-create-ch6lv"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.871580 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dc3a612-b94e-44d8-a37f-7788316b9156-operator-scripts\") pod \"cinder-db-create-ch6lv\" (UID: \"3dc3a612-b94e-44d8-a37f-7788316b9156\") " pod="openstack/cinder-db-create-ch6lv"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.871885 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e7fc25-3533-452e-b1cd-1bb63cf92b60-operator-scripts\") pod \"cinder-cc4d-account-create-update-d2jtl\" (UID: \"06e7fc25-3533-452e-b1cd-1bb63cf92b60\") " pod="openstack/cinder-cc4d-account-create-update-d2jtl"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.877110 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-5fzmw"]
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.878447 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5fzmw"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.883190 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.883413 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.883804 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.888247 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q4vbp"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.892021 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5fzmw"]
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.903256 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z7tl\" (UniqueName: \"kubernetes.io/projected/3dc3a612-b94e-44d8-a37f-7788316b9156-kube-api-access-6z7tl\") pod \"cinder-db-create-ch6lv\" (UID: \"3dc3a612-b94e-44d8-a37f-7788316b9156\") " pod="openstack/cinder-db-create-ch6lv"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.909984 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ch6lv"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.915198 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc8m4\" (UniqueName: \"kubernetes.io/projected/06e7fc25-3533-452e-b1cd-1bb63cf92b60-kube-api-access-qc8m4\") pod \"cinder-cc4d-account-create-update-d2jtl\" (UID: \"06e7fc25-3533-452e-b1cd-1bb63cf92b60\") " pod="openstack/cinder-cc4d-account-create-update-d2jtl"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.932930 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cc4d-account-create-update-d2jtl"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.974773 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3e3875-21c2-42e2-ba9b-ed981baab427-config-data\") pod \"keystone-db-sync-5fzmw\" (UID: \"0d3e3875-21c2-42e2-ba9b-ed981baab427\") " pod="openstack/keystone-db-sync-5fzmw"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.974829 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmsds\" (UniqueName: \"kubernetes.io/projected/0d3e3875-21c2-42e2-ba9b-ed981baab427-kube-api-access-hmsds\") pod \"keystone-db-sync-5fzmw\" (UID: \"0d3e3875-21c2-42e2-ba9b-ed981baab427\") " pod="openstack/keystone-db-sync-5fzmw"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.974876 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ab95708-4d18-4482-be83-b9f184b8a8f0-operator-scripts\") pod \"barbican-364b-account-create-update-csrfb\" (UID: \"3ab95708-4d18-4482-be83-b9f184b8a8f0\") " pod="openstack/barbican-364b-account-create-update-csrfb"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.974937 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z59l\" (UniqueName: \"kubernetes.io/projected/3ab95708-4d18-4482-be83-b9f184b8a8f0-kube-api-access-7z59l\") pod \"barbican-364b-account-create-update-csrfb\" (UID: \"3ab95708-4d18-4482-be83-b9f184b8a8f0\") " pod="openstack/barbican-364b-account-create-update-csrfb"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.975000 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3e3875-21c2-42e2-ba9b-ed981baab427-combined-ca-bundle\") pod \"keystone-db-sync-5fzmw\" (UID: \"0d3e3875-21c2-42e2-ba9b-ed981baab427\") " pod="openstack/keystone-db-sync-5fzmw"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.976039 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ab95708-4d18-4482-be83-b9f184b8a8f0-operator-scripts\") pod \"barbican-364b-account-create-update-csrfb\" (UID: \"3ab95708-4d18-4482-be83-b9f184b8a8f0\") " pod="openstack/barbican-364b-account-create-update-csrfb"
Jan 27 20:28:45 crc kubenswrapper[4858]: I0127 20:28:45.994750 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z59l\" (UniqueName: \"kubernetes.io/projected/3ab95708-4d18-4482-be83-b9f184b8a8f0-kube-api-access-7z59l\") pod \"barbican-364b-account-create-update-csrfb\" (UID: \"3ab95708-4d18-4482-be83-b9f184b8a8f0\") " pod="openstack/barbican-364b-account-create-update-csrfb"
Jan 27 20:28:46 crc kubenswrapper[4858]: I0127 20:28:46.020239 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-364b-account-create-update-csrfb"
Jan 27 20:28:46 crc kubenswrapper[4858]: I0127 20:28:46.078777 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3e3875-21c2-42e2-ba9b-ed981baab427-config-data\") pod \"keystone-db-sync-5fzmw\" (UID: \"0d3e3875-21c2-42e2-ba9b-ed981baab427\") " pod="openstack/keystone-db-sync-5fzmw"
Jan 27 20:28:46 crc kubenswrapper[4858]: I0127 20:28:46.079251 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmsds\" (UniqueName: \"kubernetes.io/projected/0d3e3875-21c2-42e2-ba9b-ed981baab427-kube-api-access-hmsds\") pod \"keystone-db-sync-5fzmw\" (UID: \"0d3e3875-21c2-42e2-ba9b-ed981baab427\") " pod="openstack/keystone-db-sync-5fzmw"
Jan 27 20:28:46 crc kubenswrapper[4858]: I0127 20:28:46.080899 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3e3875-21c2-42e2-ba9b-ed981baab427-combined-ca-bundle\") pod \"keystone-db-sync-5fzmw\" (UID: \"0d3e3875-21c2-42e2-ba9b-ed981baab427\") " pod="openstack/keystone-db-sync-5fzmw"
Jan 27 20:28:46 crc kubenswrapper[4858]: I0127 20:28:46.100926 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3e3875-21c2-42e2-ba9b-ed981baab427-config-data\") pod \"keystone-db-sync-5fzmw\" (UID: \"0d3e3875-21c2-42e2-ba9b-ed981baab427\") " pod="openstack/keystone-db-sync-5fzmw"
Jan 27 20:28:46 crc kubenswrapper[4858]: I0127 20:28:46.100934 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmsds\" (UniqueName: \"kubernetes.io/projected/0d3e3875-21c2-42e2-ba9b-ed981baab427-kube-api-access-hmsds\") pod \"keystone-db-sync-5fzmw\" (UID: \"0d3e3875-21c2-42e2-ba9b-ed981baab427\") " pod="openstack/keystone-db-sync-5fzmw"
Jan 27 20:28:46 crc kubenswrapper[4858]: I0127 20:28:46.103585 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3e3875-21c2-42e2-ba9b-ed981baab427-combined-ca-bundle\") pod \"keystone-db-sync-5fzmw\" (UID: \"0d3e3875-21c2-42e2-ba9b-ed981baab427\") " pod="openstack/keystone-db-sync-5fzmw"
Jan 27 20:28:46 crc kubenswrapper[4858]: I0127 20:28:46.356584 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5fzmw"
Jan 27 20:28:46 crc kubenswrapper[4858]: I0127 20:28:46.460995 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-364b-account-create-update-csrfb"]
Jan 27 20:28:46 crc kubenswrapper[4858]: I0127 20:28:46.528842 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cc4d-account-create-update-d2jtl"]
Jan 27 20:28:46 crc kubenswrapper[4858]: I0127 20:28:46.538571 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hh25x"]
Jan 27 20:28:46 crc kubenswrapper[4858]: I0127 20:28:46.546780 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ch6lv"]
Jan 27 20:28:46 crc kubenswrapper[4858]: I0127 20:28:46.896887 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5fzmw"]
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.284914 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-nwnrz"]
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.286763 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-nwnrz"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.289168 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-gt8qw"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.291403 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.309977 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-config-data\") pod \"watcher-db-sync-nwnrz\" (UID: \"7e49e861-a431-4a8f-8864-9672d699d9a0\") " pod="openstack/watcher-db-sync-nwnrz"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.310149 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv4qb\" (UniqueName: \"kubernetes.io/projected/7e49e861-a431-4a8f-8864-9672d699d9a0-kube-api-access-tv4qb\") pod \"watcher-db-sync-nwnrz\" (UID: \"7e49e861-a431-4a8f-8864-9672d699d9a0\") " pod="openstack/watcher-db-sync-nwnrz"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.310221 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-db-sync-config-data\") pod \"watcher-db-sync-nwnrz\" (UID: \"7e49e861-a431-4a8f-8864-9672d699d9a0\") " pod="openstack/watcher-db-sync-nwnrz"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.310323 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-combined-ca-bundle\") pod \"watcher-db-sync-nwnrz\" (UID: \"7e49e861-a431-4a8f-8864-9672d699d9a0\") " pod="openstack/watcher-db-sync-nwnrz"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.320536 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-nwnrz"]
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.347622 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-gjm29"]
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.348891 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gjm29"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.375899 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gjm29"]
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.411819 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-db-sync-config-data\") pod \"watcher-db-sync-nwnrz\" (UID: \"7e49e861-a431-4a8f-8864-9672d699d9a0\") " pod="openstack/watcher-db-sync-nwnrz"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.411904 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrhhx\" (UniqueName: \"kubernetes.io/projected/f9a9d6ee-6747-442f-a1d8-222920bef47e-kube-api-access-jrhhx\") pod \"glance-db-create-gjm29\" (UID: \"f9a9d6ee-6747-442f-a1d8-222920bef47e\") " pod="openstack/glance-db-create-gjm29"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.411937 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-combined-ca-bundle\") pod \"watcher-db-sync-nwnrz\" (UID: \"7e49e861-a431-4a8f-8864-9672d699d9a0\") " pod="openstack/watcher-db-sync-nwnrz"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.411989 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-config-data\") pod \"watcher-db-sync-nwnrz\" (UID: \"7e49e861-a431-4a8f-8864-9672d699d9a0\") " pod="openstack/watcher-db-sync-nwnrz"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.412035 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv4qb\" (UniqueName: \"kubernetes.io/projected/7e49e861-a431-4a8f-8864-9672d699d9a0-kube-api-access-tv4qb\") pod \"watcher-db-sync-nwnrz\" (UID: \"7e49e861-a431-4a8f-8864-9672d699d9a0\") " pod="openstack/watcher-db-sync-nwnrz"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.412062 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a9d6ee-6747-442f-a1d8-222920bef47e-operator-scripts\") pod \"glance-db-create-gjm29\" (UID: \"f9a9d6ee-6747-442f-a1d8-222920bef47e\") " pod="openstack/glance-db-create-gjm29"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.424207 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-config-data\") pod \"watcher-db-sync-nwnrz\" (UID: \"7e49e861-a431-4a8f-8864-9672d699d9a0\") " pod="openstack/watcher-db-sync-nwnrz"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.425360 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-374b-account-create-update-nmdgk"]
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.426301 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-combined-ca-bundle\") pod \"watcher-db-sync-nwnrz\" (UID: \"7e49e861-a431-4a8f-8864-9672d699d9a0\") " pod="openstack/watcher-db-sync-nwnrz"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.428210 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-db-sync-config-data\") pod \"watcher-db-sync-nwnrz\" (UID: \"7e49e861-a431-4a8f-8864-9672d699d9a0\") " pod="openstack/watcher-db-sync-nwnrz"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.433601 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-374b-account-create-update-nmdgk"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.441364 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-374b-account-create-update-nmdgk"]
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.448241 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.455258 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv4qb\" (UniqueName: \"kubernetes.io/projected/7e49e861-a431-4a8f-8864-9672d699d9a0-kube-api-access-tv4qb\") pod \"watcher-db-sync-nwnrz\" (UID: \"7e49e861-a431-4a8f-8864-9672d699d9a0\") " pod="openstack/watcher-db-sync-nwnrz"
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.494533 4858 generic.go:334] "Generic (PLEG): container finished" podID="3ab95708-4d18-4482-be83-b9f184b8a8f0" containerID="4d70e56c3d3a4cdcba438c5451d8e7c7f7192bb4f10bcfa81fec4e27c20095f0" exitCode=0
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.494705 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-364b-account-create-update-csrfb" event={"ID":"3ab95708-4d18-4482-be83-b9f184b8a8f0","Type":"ContainerDied","Data":"4d70e56c3d3a4cdcba438c5451d8e7c7f7192bb4f10bcfa81fec4e27c20095f0"}
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.494742 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-364b-account-create-update-csrfb" event={"ID":"3ab95708-4d18-4482-be83-b9f184b8a8f0","Type":"ContainerStarted","Data":"8028882f3c75c7d400211f9c7e84a7a2481415d3ee8a808aebe3d5834e25967c"}
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.496684 4858 generic.go:334] "Generic (PLEG): container finished" podID="d33fccf1-31e1-48da-9f19-598e521a8357" containerID="95ebc05229889acd0760c4cccbee16f1f7b7e72f5e52f81857016afb0b2d1b7c" exitCode=0
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.496751 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hh25x" event={"ID":"d33fccf1-31e1-48da-9f19-598e521a8357","Type":"ContainerDied","Data":"95ebc05229889acd0760c4cccbee16f1f7b7e72f5e52f81857016afb0b2d1b7c"}
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.496782 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hh25x" event={"ID":"d33fccf1-31e1-48da-9f19-598e521a8357","Type":"ContainerStarted","Data":"38e5415c54ae8a1726efb7ede6c264ce0443dcd9182336c61863da2eebc974ba"}
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.498521 4858 generic.go:334] "Generic (PLEG): container finished" podID="3dc3a612-b94e-44d8-a37f-7788316b9156" containerID="caf128d862a6e275cd3e353b499ea58b115a8a5f6c85813ea7d4c1f407e1a290" exitCode=0
Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.498667 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ch6lv" event={"ID":"3dc3a612-b94e-44d8-a37f-7788316b9156","Type":"ContainerDied","Data":"caf128d862a6e275cd3e353b499ea58b115a8a5f6c85813ea7d4c1f407e1a290"}
event={"ID":"3dc3a612-b94e-44d8-a37f-7788316b9156","Type":"ContainerDied","Data":"caf128d862a6e275cd3e353b499ea58b115a8a5f6c85813ea7d4c1f407e1a290"} Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.498711 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ch6lv" event={"ID":"3dc3a612-b94e-44d8-a37f-7788316b9156","Type":"ContainerStarted","Data":"f7082878f27a5fc9e7c50cc0630f7e01012deb0685c062e41f6538c8a13aab82"} Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.504234 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5fzmw" event={"ID":"0d3e3875-21c2-42e2-ba9b-ed981baab427","Type":"ContainerStarted","Data":"debda5dd84e6a531202a260e333a4bb47e7301cf026556235030a102bc712571"} Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.506680 4858 generic.go:334] "Generic (PLEG): container finished" podID="06e7fc25-3533-452e-b1cd-1bb63cf92b60" containerID="e8f9b65ca82a4810c2f4128d4a5385a9ab7c32287bf42bcd935f06eeab4aa458" exitCode=0 Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.506829 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cc4d-account-create-update-d2jtl" event={"ID":"06e7fc25-3533-452e-b1cd-1bb63cf92b60","Type":"ContainerDied","Data":"e8f9b65ca82a4810c2f4128d4a5385a9ab7c32287bf42bcd935f06eeab4aa458"} Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.506967 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cc4d-account-create-update-d2jtl" event={"ID":"06e7fc25-3533-452e-b1cd-1bb63cf92b60","Type":"ContainerStarted","Data":"b79faf9816ab27e5b55c575ab47c78a20e7dc6419608bc68bbb65bc6366ec9b2"} Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.514481 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a9d6ee-6747-442f-a1d8-222920bef47e-operator-scripts\") pod \"glance-db-create-gjm29\" (UID: \"f9a9d6ee-6747-442f-a1d8-222920bef47e\") " pod="openstack/glance-db-create-gjm29" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.514785 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2596f4b-7e20-4979-be17-e263bc5949c0-operator-scripts\") pod \"glance-374b-account-create-update-nmdgk\" (UID: \"b2596f4b-7e20-4979-be17-e263bc5949c0\") " pod="openstack/glance-374b-account-create-update-nmdgk" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.514911 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctz7m\" (UniqueName: \"kubernetes.io/projected/b2596f4b-7e20-4979-be17-e263bc5949c0-kube-api-access-ctz7m\") pod \"glance-374b-account-create-update-nmdgk\" (UID: \"b2596f4b-7e20-4979-be17-e263bc5949c0\") " pod="openstack/glance-374b-account-create-update-nmdgk" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.515029 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrhhx\" (UniqueName: \"kubernetes.io/projected/f9a9d6ee-6747-442f-a1d8-222920bef47e-kube-api-access-jrhhx\") pod \"glance-db-create-gjm29\" (UID: \"f9a9d6ee-6747-442f-a1d8-222920bef47e\") " pod="openstack/glance-db-create-gjm29" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.516427 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f9a9d6ee-6747-442f-a1d8-222920bef47e-operator-scripts\") pod \"glance-db-create-gjm29\" (UID: \"f9a9d6ee-6747-442f-a1d8-222920bef47e\") " pod="openstack/glance-db-create-gjm29" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.535953 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrhhx\" (UniqueName: \"kubernetes.io/projected/f9a9d6ee-6747-442f-a1d8-222920bef47e-kube-api-access-jrhhx\") pod \"glance-db-create-gjm29\" (UID: \"f9a9d6ee-6747-442f-a1d8-222920bef47e\") " pod="openstack/glance-db-create-gjm29" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.617827 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctz7m\" (UniqueName: \"kubernetes.io/projected/b2596f4b-7e20-4979-be17-e263bc5949c0-kube-api-access-ctz7m\") pod \"glance-374b-account-create-update-nmdgk\" (UID: \"b2596f4b-7e20-4979-be17-e263bc5949c0\") " pod="openstack/glance-374b-account-create-update-nmdgk" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.618327 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2596f4b-7e20-4979-be17-e263bc5949c0-operator-scripts\") pod \"glance-374b-account-create-update-nmdgk\" (UID: \"b2596f4b-7e20-4979-be17-e263bc5949c0\") " pod="openstack/glance-374b-account-create-update-nmdgk" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.619163 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2596f4b-7e20-4979-be17-e263bc5949c0-operator-scripts\") pod \"glance-374b-account-create-update-nmdgk\" (UID: \"b2596f4b-7e20-4979-be17-e263bc5949c0\") " pod="openstack/glance-374b-account-create-update-nmdgk" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.624775 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2a45-account-create-update-cv72r"] Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.626461 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2a45-account-create-update-cv72r" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.633429 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.643893 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctz7m\" (UniqueName: \"kubernetes.io/projected/b2596f4b-7e20-4979-be17-e263bc5949c0-kube-api-access-ctz7m\") pod \"glance-374b-account-create-update-nmdgk\" (UID: \"b2596f4b-7e20-4979-be17-e263bc5949c0\") " pod="openstack/glance-374b-account-create-update-nmdgk" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.648584 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-x7vsq"] Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.650455 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-x7vsq" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.668046 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-x7vsq"] Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.692921 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2a45-account-create-update-cv72r"] Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.715384 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-nwnrz" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.729653 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42982fed-6c85-4967-831f-3d1f7715fa5f-operator-scripts\") pod \"neutron-db-create-x7vsq\" (UID: \"42982fed-6c85-4967-831f-3d1f7715fa5f\") " pod="openstack/neutron-db-create-x7vsq" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.729730 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3-operator-scripts\") pod \"neutron-2a45-account-create-update-cv72r\" (UID: \"1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3\") " pod="openstack/neutron-2a45-account-create-update-cv72r" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.729769 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87wmb\" (UniqueName: \"kubernetes.io/projected/42982fed-6c85-4967-831f-3d1f7715fa5f-kube-api-access-87wmb\") pod \"neutron-db-create-x7vsq\" (UID: \"42982fed-6c85-4967-831f-3d1f7715fa5f\") " pod="openstack/neutron-db-create-x7vsq" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.729813 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7xrp\" (UniqueName: \"kubernetes.io/projected/1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3-kube-api-access-j7xrp\") pod \"neutron-2a45-account-create-update-cv72r\" (UID: \"1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3\") " pod="openstack/neutron-2a45-account-create-update-cv72r" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.831887 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gjm29" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.832669 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42982fed-6c85-4967-831f-3d1f7715fa5f-operator-scripts\") pod \"neutron-db-create-x7vsq\" (UID: \"42982fed-6c85-4967-831f-3d1f7715fa5f\") " pod="openstack/neutron-db-create-x7vsq" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.832754 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3-operator-scripts\") pod \"neutron-2a45-account-create-update-cv72r\" (UID: \"1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3\") " pod="openstack/neutron-2a45-account-create-update-cv72r" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.832790 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87wmb\" (UniqueName: \"kubernetes.io/projected/42982fed-6c85-4967-831f-3d1f7715fa5f-kube-api-access-87wmb\") pod \"neutron-db-create-x7vsq\" (UID: \"42982fed-6c85-4967-831f-3d1f7715fa5f\") " pod="openstack/neutron-db-create-x7vsq" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.832849 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7xrp\" (UniqueName: \"kubernetes.io/projected/1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3-kube-api-access-j7xrp\") pod \"neutron-2a45-account-create-update-cv72r\" (UID: \"1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3\") " pod="openstack/neutron-2a45-account-create-update-cv72r" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.833774 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3-operator-scripts\") pod \"neutron-2a45-account-create-update-cv72r\" (UID: \"1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3\") " pod="openstack/neutron-2a45-account-create-update-cv72r" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.834022 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42982fed-6c85-4967-831f-3d1f7715fa5f-operator-scripts\") pod \"neutron-db-create-x7vsq\" (UID: \"42982fed-6c85-4967-831f-3d1f7715fa5f\") " pod="openstack/neutron-db-create-x7vsq" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.855828 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7xrp\" (UniqueName: \"kubernetes.io/projected/1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3-kube-api-access-j7xrp\") pod \"neutron-2a45-account-create-update-cv72r\" (UID: \"1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3\") " pod="openstack/neutron-2a45-account-create-update-cv72r" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.862063 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87wmb\" (UniqueName: \"kubernetes.io/projected/42982fed-6c85-4967-831f-3d1f7715fa5f-kube-api-access-87wmb\") pod \"neutron-db-create-x7vsq\" (UID: \"42982fed-6c85-4967-831f-3d1f7715fa5f\") " pod="openstack/neutron-db-create-x7vsq" Jan 27 20:28:47 crc kubenswrapper[4858]: I0127 20:28:47.891337 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-374b-account-create-update-nmdgk" Jan 27 20:28:48 crc kubenswrapper[4858]: I0127 20:28:47.989588 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2a45-account-create-update-cv72r" Jan 27 20:28:48 crc kubenswrapper[4858]: I0127 20:28:48.014108 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x7vsq" Jan 27 20:28:48 crc kubenswrapper[4858]: I0127 20:28:48.275862 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gjm29"] Jan 27 20:28:48 crc kubenswrapper[4858]: I0127 20:28:48.303519 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-nwnrz"] Jan 27 20:28:48 crc kubenswrapper[4858]: W0127 20:28:48.325710 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e49e861_a431_4a8f_8864_9672d699d9a0.slice/crio-6fc6b0f46a38dad050360e6fda3f23c9e65314f4e039911a64f82b0e5396dcfd WatchSource:0}: Error finding container 6fc6b0f46a38dad050360e6fda3f23c9e65314f4e039911a64f82b0e5396dcfd: Status 404 returned error can't find the container with id 6fc6b0f46a38dad050360e6fda3f23c9e65314f4e039911a64f82b0e5396dcfd Jan 27 20:28:48 crc kubenswrapper[4858]: I0127 20:28:48.527820 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gjm29" event={"ID":"f9a9d6ee-6747-442f-a1d8-222920bef47e","Type":"ContainerStarted","Data":"3ffa5f2a89f0c71aede6a2ac0b7aacedee2d339bc4b07dcd4315f58570ccf22b"} Jan 27 20:28:48 crc kubenswrapper[4858]: I0127 20:28:48.527894 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gjm29" event={"ID":"f9a9d6ee-6747-442f-a1d8-222920bef47e","Type":"ContainerStarted","Data":"eca379b54644103c8f6cd593e5790d7174f3ce507b48376e4a5f79b455ddde4f"} Jan 27 20:28:48 crc kubenswrapper[4858]: I0127 20:28:48.534974 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-nwnrz" event={"ID":"7e49e861-a431-4a8f-8864-9672d699d9a0","Type":"ContainerStarted","Data":"6fc6b0f46a38dad050360e6fda3f23c9e65314f4e039911a64f82b0e5396dcfd"} Jan 27 20:28:48 crc kubenswrapper[4858]: I0127 20:28:48.561592 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-gjm29" podStartSLOduration=1.561564593 podStartE2EDuration="1.561564593s" podCreationTimestamp="2026-01-27 20:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:28:48.551047878 +0000 UTC m=+1273.258863614" watchObservedRunningTime="2026-01-27 20:28:48.561564593 +0000 UTC m=+1273.269380299" Jan 27 20:28:48 crc kubenswrapper[4858]: I0127 20:28:48.626980 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-374b-account-create-update-nmdgk"] Jan 27 20:28:48 crc kubenswrapper[4858]: I0127 20:28:48.658685 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2a45-account-create-update-cv72r"] Jan 27 20:28:48 crc kubenswrapper[4858]: I0127 20:28:48.744890 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-x7vsq"] Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.205140 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-ch6lv" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.208486 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-364b-account-create-update-csrfb" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.241067 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hh25x" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.269296 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z7tl\" (UniqueName: \"kubernetes.io/projected/3dc3a612-b94e-44d8-a37f-7788316b9156-kube-api-access-6z7tl\") pod \"3dc3a612-b94e-44d8-a37f-7788316b9156\" (UID: \"3dc3a612-b94e-44d8-a37f-7788316b9156\") " Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.269376 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dc3a612-b94e-44d8-a37f-7788316b9156-operator-scripts\") pod \"3dc3a612-b94e-44d8-a37f-7788316b9156\" (UID: \"3dc3a612-b94e-44d8-a37f-7788316b9156\") " Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.269490 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dtkh\" (UniqueName: \"kubernetes.io/projected/d33fccf1-31e1-48da-9f19-598e521a8357-kube-api-access-9dtkh\") pod \"d33fccf1-31e1-48da-9f19-598e521a8357\" (UID: \"d33fccf1-31e1-48da-9f19-598e521a8357\") " Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.269525 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ab95708-4d18-4482-be83-b9f184b8a8f0-operator-scripts\") pod \"3ab95708-4d18-4482-be83-b9f184b8a8f0\" (UID: \"3ab95708-4d18-4482-be83-b9f184b8a8f0\") " Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.270737 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dc3a612-b94e-44d8-a37f-7788316b9156-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3dc3a612-b94e-44d8-a37f-7788316b9156" (UID: "3dc3a612-b94e-44d8-a37f-7788316b9156"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.271780 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab95708-4d18-4482-be83-b9f184b8a8f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3ab95708-4d18-4482-be83-b9f184b8a8f0" (UID: "3ab95708-4d18-4482-be83-b9f184b8a8f0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.278026 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d33fccf1-31e1-48da-9f19-598e521a8357-operator-scripts\") pod \"d33fccf1-31e1-48da-9f19-598e521a8357\" (UID: \"d33fccf1-31e1-48da-9f19-598e521a8357\") " Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.278172 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z59l\" (UniqueName: \"kubernetes.io/projected/3ab95708-4d18-4482-be83-b9f184b8a8f0-kube-api-access-7z59l\") pod \"3ab95708-4d18-4482-be83-b9f184b8a8f0\" (UID: \"3ab95708-4d18-4482-be83-b9f184b8a8f0\") " Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.278941 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dc3a612-b94e-44d8-a37f-7788316b9156-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.278973 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ab95708-4d18-4482-be83-b9f184b8a8f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.279305 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d33fccf1-31e1-48da-9f19-598e521a8357-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d33fccf1-31e1-48da-9f19-598e521a8357" (UID: "d33fccf1-31e1-48da-9f19-598e521a8357"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.293913 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cc4d-account-create-update-d2jtl" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.295962 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dc3a612-b94e-44d8-a37f-7788316b9156-kube-api-access-6z7tl" (OuterVolumeSpecName: "kube-api-access-6z7tl") pod "3dc3a612-b94e-44d8-a37f-7788316b9156" (UID: "3dc3a612-b94e-44d8-a37f-7788316b9156"). InnerVolumeSpecName "kube-api-access-6z7tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.296536 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d33fccf1-31e1-48da-9f19-598e521a8357-kube-api-access-9dtkh" (OuterVolumeSpecName: "kube-api-access-9dtkh") pod "d33fccf1-31e1-48da-9f19-598e521a8357" (UID: "d33fccf1-31e1-48da-9f19-598e521a8357"). InnerVolumeSpecName "kube-api-access-9dtkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.296747 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab95708-4d18-4482-be83-b9f184b8a8f0-kube-api-access-7z59l" (OuterVolumeSpecName: "kube-api-access-7z59l") pod "3ab95708-4d18-4482-be83-b9f184b8a8f0" (UID: "3ab95708-4d18-4482-be83-b9f184b8a8f0"). InnerVolumeSpecName "kube-api-access-7z59l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.380248 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc8m4\" (UniqueName: \"kubernetes.io/projected/06e7fc25-3533-452e-b1cd-1bb63cf92b60-kube-api-access-qc8m4\") pod \"06e7fc25-3533-452e-b1cd-1bb63cf92b60\" (UID: \"06e7fc25-3533-452e-b1cd-1bb63cf92b60\") " Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.380368 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e7fc25-3533-452e-b1cd-1bb63cf92b60-operator-scripts\") pod \"06e7fc25-3533-452e-b1cd-1bb63cf92b60\" (UID: \"06e7fc25-3533-452e-b1cd-1bb63cf92b60\") " Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.380861 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z59l\" (UniqueName: \"kubernetes.io/projected/3ab95708-4d18-4482-be83-b9f184b8a8f0-kube-api-access-7z59l\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.380878 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z7tl\" (UniqueName: \"kubernetes.io/projected/3dc3a612-b94e-44d8-a37f-7788316b9156-kube-api-access-6z7tl\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.380889 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dtkh\" (UniqueName: \"kubernetes.io/projected/d33fccf1-31e1-48da-9f19-598e521a8357-kube-api-access-9dtkh\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.380899 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d33fccf1-31e1-48da-9f19-598e521a8357-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.381013 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e7fc25-3533-452e-b1cd-1bb63cf92b60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06e7fc25-3533-452e-b1cd-1bb63cf92b60" (UID: "06e7fc25-3533-452e-b1cd-1bb63cf92b60"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.388133 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e7fc25-3533-452e-b1cd-1bb63cf92b60-kube-api-access-qc8m4" (OuterVolumeSpecName: "kube-api-access-qc8m4") pod "06e7fc25-3533-452e-b1cd-1bb63cf92b60" (UID: "06e7fc25-3533-452e-b1cd-1bb63cf92b60"). InnerVolumeSpecName "kube-api-access-qc8m4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.482932 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc8m4\" (UniqueName: \"kubernetes.io/projected/06e7fc25-3533-452e-b1cd-1bb63cf92b60-kube-api-access-qc8m4\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.482983 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e7fc25-3533-452e-b1cd-1bb63cf92b60-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.547962 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x7vsq" event={"ID":"42982fed-6c85-4967-831f-3d1f7715fa5f","Type":"ContainerStarted","Data":"e0a00ff742d3e6d9379a6633640d3cab5ae790117688d92bbce7853bda95f6ef"} Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.548036 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x7vsq" event={"ID":"42982fed-6c85-4967-831f-3d1f7715fa5f","Type":"ContainerStarted","Data":"1524272df3f6319d9529ea6ab6e18ab42704e2635c7f43ab6ab42a94df3c6560"} Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.551936 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-374b-account-create-update-nmdgk" event={"ID":"b2596f4b-7e20-4979-be17-e263bc5949c0","Type":"ContainerStarted","Data":"331471801af9273ef2d4ae91fd9140271c70e1c3fa51cc85bd69670ec96a4e1c"} Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.551993 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-374b-account-create-update-nmdgk" event={"ID":"b2596f4b-7e20-4979-be17-e263bc5949c0","Type":"ContainerStarted","Data":"3570734bd1f5a66a98d57fb8e40ff3fabb0d03ecdc38c5e7c1725dd7b8bfb312"} Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.555038 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2a45-account-create-update-cv72r" event={"ID":"1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3","Type":"ContainerStarted","Data":"7bea31aa778d39822f81cd4a51e08e662f3c7eadbf65b96acd6fc50dde7ef951"} Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.555108 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2a45-account-create-update-cv72r" event={"ID":"1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3","Type":"ContainerStarted","Data":"76c57041e79ad1ca4584472080316ea51c252637d2cc880c25f52d8d49715b61"} Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.562168 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ch6lv" event={"ID":"3dc3a612-b94e-44d8-a37f-7788316b9156","Type":"ContainerDied","Data":"f7082878f27a5fc9e7c50cc0630f7e01012deb0685c062e41f6538c8a13aab82"} Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.562200 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-ch6lv" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.562247 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7082878f27a5fc9e7c50cc0630f7e01012deb0685c062e41f6538c8a13aab82" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.567278 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-x7vsq" podStartSLOduration=2.567256063 podStartE2EDuration="2.567256063s" podCreationTimestamp="2026-01-27 20:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:28:49.560832137 +0000 UTC m=+1274.268647863" watchObservedRunningTime="2026-01-27 20:28:49.567256063 +0000 UTC m=+1274.275071759" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.567624 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cc4d-account-create-update-d2jtl" event={"ID":"06e7fc25-3533-452e-b1cd-1bb63cf92b60","Type":"ContainerDied","Data":"b79faf9816ab27e5b55c575ab47c78a20e7dc6419608bc68bbb65bc6366ec9b2"} Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.567667 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b79faf9816ab27e5b55c575ab47c78a20e7dc6419608bc68bbb65bc6366ec9b2" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.567756 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cc4d-account-create-update-d2jtl" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.571710 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-364b-account-create-update-csrfb" event={"ID":"3ab95708-4d18-4482-be83-b9f184b8a8f0","Type":"ContainerDied","Data":"8028882f3c75c7d400211f9c7e84a7a2481415d3ee8a808aebe3d5834e25967c"} Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.571765 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8028882f3c75c7d400211f9c7e84a7a2481415d3ee8a808aebe3d5834e25967c" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.571824 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-364b-account-create-update-csrfb" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.575806 4858 generic.go:334] "Generic (PLEG): container finished" podID="f9a9d6ee-6747-442f-a1d8-222920bef47e" containerID="3ffa5f2a89f0c71aede6a2ac0b7aacedee2d339bc4b07dcd4315f58570ccf22b" exitCode=0 Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.575914 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gjm29" event={"ID":"f9a9d6ee-6747-442f-a1d8-222920bef47e","Type":"ContainerDied","Data":"3ffa5f2a89f0c71aede6a2ac0b7aacedee2d339bc4b07dcd4315f58570ccf22b"} Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.578077 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hh25x" event={"ID":"d33fccf1-31e1-48da-9f19-598e521a8357","Type":"ContainerDied","Data":"38e5415c54ae8a1726efb7ede6c264ce0443dcd9182336c61863da2eebc974ba"} Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.578109 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38e5415c54ae8a1726efb7ede6c264ce0443dcd9182336c61863da2eebc974ba" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.578172 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-hh25x" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.602523 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-2a45-account-create-update-cv72r" podStartSLOduration=2.602494375 podStartE2EDuration="2.602494375s" podCreationTimestamp="2026-01-27 20:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:28:49.585738089 +0000 UTC m=+1274.293553805" watchObservedRunningTime="2026-01-27 20:28:49.602494375 +0000 UTC m=+1274.310310081" Jan 27 20:28:49 crc kubenswrapper[4858]: I0127 20:28:49.615342 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-374b-account-create-update-nmdgk" podStartSLOduration=2.615316117 podStartE2EDuration="2.615316117s" podCreationTimestamp="2026-01-27 20:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:28:49.601110305 +0000 UTC m=+1274.308926011" watchObservedRunningTime="2026-01-27 20:28:49.615316117 +0000 UTC m=+1274.323131823" Jan 27 20:28:50 crc kubenswrapper[4858]: I0127 20:28:50.616267 4858 generic.go:334] "Generic (PLEG): container finished" podID="42982fed-6c85-4967-831f-3d1f7715fa5f" containerID="e0a00ff742d3e6d9379a6633640d3cab5ae790117688d92bbce7853bda95f6ef" exitCode=0 Jan 27 20:28:50 crc kubenswrapper[4858]: I0127 20:28:50.616344 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x7vsq" event={"ID":"42982fed-6c85-4967-831f-3d1f7715fa5f","Type":"ContainerDied","Data":"e0a00ff742d3e6d9379a6633640d3cab5ae790117688d92bbce7853bda95f6ef"} Jan 27 20:28:50 crc kubenswrapper[4858]: I0127 20:28:50.621336 4858 generic.go:334] "Generic (PLEG): container finished" podID="b2596f4b-7e20-4979-be17-e263bc5949c0" containerID="331471801af9273ef2d4ae91fd9140271c70e1c3fa51cc85bd69670ec96a4e1c" exitCode=0 Jan 27 20:28:50 crc kubenswrapper[4858]: I0127 20:28:50.621441 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-374b-account-create-update-nmdgk" event={"ID":"b2596f4b-7e20-4979-be17-e263bc5949c0","Type":"ContainerDied","Data":"331471801af9273ef2d4ae91fd9140271c70e1c3fa51cc85bd69670ec96a4e1c"} Jan 27 20:28:50 crc kubenswrapper[4858]: I0127 20:28:50.623472 4858 generic.go:334] "Generic (PLEG): container finished" podID="1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3" containerID="7bea31aa778d39822f81cd4a51e08e662f3c7eadbf65b96acd6fc50dde7ef951" exitCode=0 Jan 27 20:28:50 crc kubenswrapper[4858]: I0127 20:28:50.623618 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2a45-account-create-update-cv72r" event={"ID":"1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3","Type":"ContainerDied","Data":"7bea31aa778d39822f81cd4a51e08e662f3c7eadbf65b96acd6fc50dde7ef951"} Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.305469 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-374b-account-create-update-nmdgk" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.320440 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2a45-account-create-update-cv72r" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.321382 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-x7vsq" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.346075 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gjm29" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.372416 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctz7m\" (UniqueName: \"kubernetes.io/projected/b2596f4b-7e20-4979-be17-e263bc5949c0-kube-api-access-ctz7m\") pod \"b2596f4b-7e20-4979-be17-e263bc5949c0\" (UID: \"b2596f4b-7e20-4979-be17-e263bc5949c0\") " Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.372832 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42982fed-6c85-4967-831f-3d1f7715fa5f-operator-scripts\") pod \"42982fed-6c85-4967-831f-3d1f7715fa5f\" (UID: \"42982fed-6c85-4967-831f-3d1f7715fa5f\") " Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.372923 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87wmb\" (UniqueName: \"kubernetes.io/projected/42982fed-6c85-4967-831f-3d1f7715fa5f-kube-api-access-87wmb\") pod \"42982fed-6c85-4967-831f-3d1f7715fa5f\" (UID: \"42982fed-6c85-4967-831f-3d1f7715fa5f\") " Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.373019 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2596f4b-7e20-4979-be17-e263bc5949c0-operator-scripts\") pod \"b2596f4b-7e20-4979-be17-e263bc5949c0\" (UID: \"b2596f4b-7e20-4979-be17-e263bc5949c0\") " Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.373072 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3-operator-scripts\") pod \"1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3\" (UID: \"1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3\") " Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.373187 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7xrp\" (UniqueName: \"kubernetes.io/projected/1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3-kube-api-access-j7xrp\") pod \"1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3\" (UID: \"1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3\") " Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.373844 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42982fed-6c85-4967-831f-3d1f7715fa5f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42982fed-6c85-4967-831f-3d1f7715fa5f" (UID: "42982fed-6c85-4967-831f-3d1f7715fa5f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.374517 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2596f4b-7e20-4979-be17-e263bc5949c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2596f4b-7e20-4979-be17-e263bc5949c0" (UID: "b2596f4b-7e20-4979-be17-e263bc5949c0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.377270 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3" (UID: "1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.384194 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3-kube-api-access-j7xrp" (OuterVolumeSpecName: "kube-api-access-j7xrp") pod "1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3" (UID: "1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3"). InnerVolumeSpecName "kube-api-access-j7xrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.384320 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42982fed-6c85-4967-831f-3d1f7715fa5f-kube-api-access-87wmb" (OuterVolumeSpecName: "kube-api-access-87wmb") pod "42982fed-6c85-4967-831f-3d1f7715fa5f" (UID: "42982fed-6c85-4967-831f-3d1f7715fa5f"). InnerVolumeSpecName "kube-api-access-87wmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.384955 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2596f4b-7e20-4979-be17-e263bc5949c0-kube-api-access-ctz7m" (OuterVolumeSpecName: "kube-api-access-ctz7m") pod "b2596f4b-7e20-4979-be17-e263bc5949c0" (UID: "b2596f4b-7e20-4979-be17-e263bc5949c0"). InnerVolumeSpecName "kube-api-access-ctz7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.475086 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrhhx\" (UniqueName: \"kubernetes.io/projected/f9a9d6ee-6747-442f-a1d8-222920bef47e-kube-api-access-jrhhx\") pod \"f9a9d6ee-6747-442f-a1d8-222920bef47e\" (UID: \"f9a9d6ee-6747-442f-a1d8-222920bef47e\") " Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.475248 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a9d6ee-6747-442f-a1d8-222920bef47e-operator-scripts\") pod \"f9a9d6ee-6747-442f-a1d8-222920bef47e\" (UID: \"f9a9d6ee-6747-442f-a1d8-222920bef47e\") " Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.475695 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9a9d6ee-6747-442f-a1d8-222920bef47e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9a9d6ee-6747-442f-a1d8-222920bef47e" (UID: "f9a9d6ee-6747-442f-a1d8-222920bef47e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.476393 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctz7m\" (UniqueName: \"kubernetes.io/projected/b2596f4b-7e20-4979-be17-e263bc5949c0-kube-api-access-ctz7m\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.476410 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42982fed-6c85-4967-831f-3d1f7715fa5f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.476428 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87wmb\" (UniqueName: \"kubernetes.io/projected/42982fed-6c85-4967-831f-3d1f7715fa5f-kube-api-access-87wmb\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.476439 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2596f4b-7e20-4979-be17-e263bc5949c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.476447 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9a9d6ee-6747-442f-a1d8-222920bef47e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.476457 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.476466 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7xrp\" (UniqueName: \"kubernetes.io/projected/1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3-kube-api-access-j7xrp\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.479481 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a9d6ee-6747-442f-a1d8-222920bef47e-kube-api-access-jrhhx" (OuterVolumeSpecName: "kube-api-access-jrhhx") pod "f9a9d6ee-6747-442f-a1d8-222920bef47e" (UID: "f9a9d6ee-6747-442f-a1d8-222920bef47e"). InnerVolumeSpecName "kube-api-access-jrhhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.579015 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrhhx\" (UniqueName: \"kubernetes.io/projected/f9a9d6ee-6747-442f-a1d8-222920bef47e-kube-api-access-jrhhx\") on node \"crc\" DevicePath \"\"" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.658212 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x7vsq" event={"ID":"42982fed-6c85-4967-831f-3d1f7715fa5f","Type":"ContainerDied","Data":"1524272df3f6319d9529ea6ab6e18ab42704e2635c7f43ab6ab42a94df3c6560"} Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.658267 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1524272df3f6319d9529ea6ab6e18ab42704e2635c7f43ab6ab42a94df3c6560" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.658356 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-x7vsq" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.660346 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-374b-account-create-update-nmdgk" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.660345 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-374b-account-create-update-nmdgk" event={"ID":"b2596f4b-7e20-4979-be17-e263bc5949c0","Type":"ContainerDied","Data":"3570734bd1f5a66a98d57fb8e40ff3fabb0d03ecdc38c5e7c1725dd7b8bfb312"} Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.660448 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3570734bd1f5a66a98d57fb8e40ff3fabb0d03ecdc38c5e7c1725dd7b8bfb312" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.662387 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2a45-account-create-update-cv72r" event={"ID":"1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3","Type":"ContainerDied","Data":"76c57041e79ad1ca4584472080316ea51c252637d2cc880c25f52d8d49715b61"} Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.662437 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76c57041e79ad1ca4584472080316ea51c252637d2cc880c25f52d8d49715b61" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.662502 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2a45-account-create-update-cv72r" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.675827 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gjm29" event={"ID":"f9a9d6ee-6747-442f-a1d8-222920bef47e","Type":"ContainerDied","Data":"eca379b54644103c8f6cd593e5790d7174f3ce507b48376e4a5f79b455ddde4f"} Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.675874 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eca379b54644103c8f6cd593e5790d7174f3ce507b48376e4a5f79b455ddde4f" Jan 27 20:28:53 crc kubenswrapper[4858]: I0127 20:28:53.676002 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gjm29" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.689508 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kr2j4"] Jan 27 20:28:57 crc kubenswrapper[4858]: E0127 20:28:57.690870 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3" containerName="mariadb-account-create-update" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.690888 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3" containerName="mariadb-account-create-update" Jan 27 20:28:57 crc kubenswrapper[4858]: E0127 20:28:57.690905 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab95708-4d18-4482-be83-b9f184b8a8f0" containerName="mariadb-account-create-update" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.690914 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab95708-4d18-4482-be83-b9f184b8a8f0" containerName="mariadb-account-create-update" Jan 27 20:28:57 crc kubenswrapper[4858]: E0127 20:28:57.690931 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2596f4b-7e20-4979-be17-e263bc5949c0" containerName="mariadb-account-create-update" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.690943 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2596f4b-7e20-4979-be17-e263bc5949c0" containerName="mariadb-account-create-update" Jan 27 20:28:57 crc kubenswrapper[4858]: E0127 20:28:57.690966 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a9d6ee-6747-442f-a1d8-222920bef47e" containerName="mariadb-database-create" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.690974 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a9d6ee-6747-442f-a1d8-222920bef47e" containerName="mariadb-database-create" Jan 27 20:28:57 crc kubenswrapper[4858]: E0127 20:28:57.690989 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dc3a612-b94e-44d8-a37f-7788316b9156" containerName="mariadb-database-create" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.690997 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dc3a612-b94e-44d8-a37f-7788316b9156" containerName="mariadb-database-create" Jan 27 20:28:57 crc kubenswrapper[4858]: E0127 20:28:57.691025 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42982fed-6c85-4967-831f-3d1f7715fa5f" containerName="mariadb-database-create" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.691033 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="42982fed-6c85-4967-831f-3d1f7715fa5f" containerName="mariadb-database-create" Jan 27 20:28:57 crc kubenswrapper[4858]: E0127 20:28:57.691045 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e7fc25-3533-452e-b1cd-1bb63cf92b60" containerName="mariadb-account-create-update" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.691056 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e7fc25-3533-452e-b1cd-1bb63cf92b60" containerName="mariadb-account-create-update" Jan 27 20:28:57 crc kubenswrapper[4858]: E0127 20:28:57.691083 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33fccf1-31e1-48da-9f19-598e521a8357" containerName="mariadb-database-create" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.691095 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33fccf1-31e1-48da-9f19-598e521a8357" 
containerName="mariadb-database-create" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.691352 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e7fc25-3533-452e-b1cd-1bb63cf92b60" containerName="mariadb-account-create-update" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.691379 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3" containerName="mariadb-account-create-update" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.691399 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dc3a612-b94e-44d8-a37f-7788316b9156" containerName="mariadb-database-create" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.691420 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2596f4b-7e20-4979-be17-e263bc5949c0" containerName="mariadb-account-create-update" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.691438 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a9d6ee-6747-442f-a1d8-222920bef47e" containerName="mariadb-database-create" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.691451 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d33fccf1-31e1-48da-9f19-598e521a8357" containerName="mariadb-database-create" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.691462 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="42982fed-6c85-4967-831f-3d1f7715fa5f" containerName="mariadb-database-create" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.691481 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab95708-4d18-4482-be83-b9f184b8a8f0" containerName="mariadb-account-create-update" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.692954 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kr2j4" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.696519 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xtrkf" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.696581 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.701701 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kr2j4"] Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.713292 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5fzmw" event={"ID":"0d3e3875-21c2-42e2-ba9b-ed981baab427","Type":"ContainerStarted","Data":"57d3595fa18d7acd981296d10cd97feb9df3ad70d3e95e9fbff8aa547bcd1155"} Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.716153 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-nwnrz" event={"ID":"7e49e861-a431-4a8f-8864-9672d699d9a0","Type":"ContainerStarted","Data":"8c6f297041903903e71142d5906feb61449bc030914e1952791d9034cf5285ef"} Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.750813 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-5fzmw" podStartSLOduration=2.636767716 podStartE2EDuration="12.750789854s" podCreationTimestamp="2026-01-27 20:28:45 +0000 UTC" firstStartedPulling="2026-01-27 20:28:46.885814092 +0000 UTC m=+1271.593629798" lastFinishedPulling="2026-01-27 20:28:56.99983623 +0000 UTC m=+1281.707651936" observedRunningTime="2026-01-27 20:28:57.739010893 +0000 UTC m=+1282.446826619" watchObservedRunningTime="2026-01-27 20:28:57.750789854 +0000 UTC m=+1282.458605560" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.752630 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-config-data\") pod \"glance-db-sync-kr2j4\" (UID: \"e04ce574-5470-43ae-8207-fb01bd98805f\") " pod="openstack/glance-db-sync-kr2j4" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.752776 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v9kf\" (UniqueName: \"kubernetes.io/projected/e04ce574-5470-43ae-8207-fb01bd98805f-kube-api-access-9v9kf\") pod \"glance-db-sync-kr2j4\" (UID: \"e04ce574-5470-43ae-8207-fb01bd98805f\") " pod="openstack/glance-db-sync-kr2j4" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.752840 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-db-sync-config-data\") pod \"glance-db-sync-kr2j4\" (UID: \"e04ce574-5470-43ae-8207-fb01bd98805f\") " pod="openstack/glance-db-sync-kr2j4" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.752908 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-combined-ca-bundle\") pod \"glance-db-sync-kr2j4\" (UID: \"e04ce574-5470-43ae-8207-fb01bd98805f\") " pod="openstack/glance-db-sync-kr2j4" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.774097 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-nwnrz" 
podStartSLOduration=2.023680303 podStartE2EDuration="10.774051969s" podCreationTimestamp="2026-01-27 20:28:47 +0000 UTC" firstStartedPulling="2026-01-27 20:28:48.329320389 +0000 UTC m=+1273.037136095" lastFinishedPulling="2026-01-27 20:28:57.079692055 +0000 UTC m=+1281.787507761" observedRunningTime="2026-01-27 20:28:57.758470547 +0000 UTC m=+1282.466286253" watchObservedRunningTime="2026-01-27 20:28:57.774051969 +0000 UTC m=+1282.481867675" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.855058 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-config-data\") pod \"glance-db-sync-kr2j4\" (UID: \"e04ce574-5470-43ae-8207-fb01bd98805f\") " pod="openstack/glance-db-sync-kr2j4" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.855181 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v9kf\" (UniqueName: \"kubernetes.io/projected/e04ce574-5470-43ae-8207-fb01bd98805f-kube-api-access-9v9kf\") pod \"glance-db-sync-kr2j4\" (UID: \"e04ce574-5470-43ae-8207-fb01bd98805f\") " pod="openstack/glance-db-sync-kr2j4" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.855233 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-db-sync-config-data\") pod \"glance-db-sync-kr2j4\" (UID: \"e04ce574-5470-43ae-8207-fb01bd98805f\") " pod="openstack/glance-db-sync-kr2j4" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.855289 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-combined-ca-bundle\") pod \"glance-db-sync-kr2j4\" (UID: \"e04ce574-5470-43ae-8207-fb01bd98805f\") " pod="openstack/glance-db-sync-kr2j4" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.860810 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-db-sync-config-data\") pod \"glance-db-sync-kr2j4\" (UID: \"e04ce574-5470-43ae-8207-fb01bd98805f\") " pod="openstack/glance-db-sync-kr2j4" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.862725 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-config-data\") pod \"glance-db-sync-kr2j4\" (UID: \"e04ce574-5470-43ae-8207-fb01bd98805f\") " pod="openstack/glance-db-sync-kr2j4" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.862942 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-combined-ca-bundle\") pod \"glance-db-sync-kr2j4\" (UID: \"e04ce574-5470-43ae-8207-fb01bd98805f\") " pod="openstack/glance-db-sync-kr2j4" Jan 27 20:28:57 crc kubenswrapper[4858]: I0127 20:28:57.887579 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v9kf\" (UniqueName: \"kubernetes.io/projected/e04ce574-5470-43ae-8207-fb01bd98805f-kube-api-access-9v9kf\") pod \"glance-db-sync-kr2j4\" (UID: \"e04ce574-5470-43ae-8207-fb01bd98805f\") " pod="openstack/glance-db-sync-kr2j4" Jan 27 20:28:58 crc kubenswrapper[4858]: I0127 20:28:58.015133 4858 util.go:30] "No sandbox for pod can be found. 
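
The two "Observed pod startup duration" entries above are consistent with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration being that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For keystone-db-sync-5fzmw: 12.750789854s minus (20:28:56.99983623 minus 20:28:46.885814092, i.e. 10.114022138s) gives exactly the reported 2.636767716. The earlier db-create pods pulled no images (both pull timestamps are the zero value), so their two durations coincide. The same arithmetic in Go, using the keystone timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps from the keystone-db-sync-5fzmw entry above.
	created, _ := time.Parse(time.RFC3339Nano, "2026-01-27T20:28:45Z")
	firstPull, _ := time.Parse(time.RFC3339Nano, "2026-01-27T20:28:46.885814092Z")
	lastPull, _ := time.Parse(time.RFC3339Nano, "2026-01-27T20:28:56.99983623Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2026-01-27T20:28:57.750789854Z")

	e2e := observed.Sub(created)         // podStartE2EDuration: 12.750789854s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: 2.636767716s
	fmt.Println(e2e, slo)
}
```
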
Need to start a new one" pod="openstack/glance-db-sync-kr2j4" Jan 27 20:28:59 crc kubenswrapper[4858]: I0127 20:28:58.599874 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kr2j4"] Jan 27 20:28:59 crc kubenswrapper[4858]: I0127 20:28:58.731973 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kr2j4" event={"ID":"e04ce574-5470-43ae-8207-fb01bd98805f","Type":"ContainerStarted","Data":"71b9aac5bebb8d78b029d6fbddb3b4f1e06ecfb2be44c838ad0b09776729313c"} Jan 27 20:29:04 crc kubenswrapper[4858]: E0127 20:29:04.240652 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e49e861_a431_4a8f_8864_9672d699d9a0.slice/crio-conmon-8c6f297041903903e71142d5906feb61449bc030914e1952791d9034cf5285ef.scope\": RecentStats: unable to find data in memory cache]" Jan 27 20:29:04 crc kubenswrapper[4858]: I0127 20:29:04.787149 4858 generic.go:334] "Generic (PLEG): container finished" podID="7e49e861-a431-4a8f-8864-9672d699d9a0" containerID="8c6f297041903903e71142d5906feb61449bc030914e1952791d9034cf5285ef" exitCode=0 Jan 27 20:29:04 crc kubenswrapper[4858]: I0127 20:29:04.787219 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-nwnrz" event={"ID":"7e49e861-a431-4a8f-8864-9672d699d9a0","Type":"ContainerDied","Data":"8c6f297041903903e71142d5906feb61449bc030914e1952791d9034cf5285ef"} Jan 27 20:29:05 crc kubenswrapper[4858]: I0127 20:29:05.798797 4858 generic.go:334] "Generic (PLEG): container finished" podID="0d3e3875-21c2-42e2-ba9b-ed981baab427" containerID="57d3595fa18d7acd981296d10cd97feb9df3ad70d3e95e9fbff8aa547bcd1155" exitCode=0 Jan 27 20:29:05 crc kubenswrapper[4858]: I0127 20:29:05.798798 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5fzmw" event={"ID":"0d3e3875-21c2-42e2-ba9b-ed981baab427","Type":"ContainerDied","Data":"57d3595fa18d7acd981296d10cd97feb9df3ad70d3e95e9fbff8aa547bcd1155"} Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.384334 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-nwnrz" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.393109 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5fzmw" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.405255 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-config-data\") pod \"7e49e861-a431-4a8f-8864-9672d699d9a0\" (UID: \"7e49e861-a431-4a8f-8864-9672d699d9a0\") " Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.405358 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3e3875-21c2-42e2-ba9b-ed981baab427-config-data\") pod \"0d3e3875-21c2-42e2-ba9b-ed981baab427\" (UID: \"0d3e3875-21c2-42e2-ba9b-ed981baab427\") " Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.405483 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmsds\" (UniqueName: \"kubernetes.io/projected/0d3e3875-21c2-42e2-ba9b-ed981baab427-kube-api-access-hmsds\") pod \"0d3e3875-21c2-42e2-ba9b-ed981baab427\" (UID: \"0d3e3875-21c2-42e2-ba9b-ed981baab427\") " Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.405615 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv4qb\" (UniqueName: \"kubernetes.io/projected/7e49e861-a431-4a8f-8864-9672d699d9a0-kube-api-access-tv4qb\") pod \"7e49e861-a431-4a8f-8864-9672d699d9a0\" (UID: \"7e49e861-a431-4a8f-8864-9672d699d9a0\") " Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.405677 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-db-sync-config-data\") pod \"7e49e861-a431-4a8f-8864-9672d699d9a0\" (UID: \"7e49e861-a431-4a8f-8864-9672d699d9a0\") " Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.405770 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3e3875-21c2-42e2-ba9b-ed981baab427-combined-ca-bundle\") pod \"0d3e3875-21c2-42e2-ba9b-ed981baab427\" (UID: \"0d3e3875-21c2-42e2-ba9b-ed981baab427\") " Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.405920 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-combined-ca-bundle\") pod \"7e49e861-a431-4a8f-8864-9672d699d9a0\" (UID: \"7e49e861-a431-4a8f-8864-9672d699d9a0\") " Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.418084 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d3e3875-21c2-42e2-ba9b-ed981baab427-kube-api-access-hmsds" (OuterVolumeSpecName: "kube-api-access-hmsds") pod "0d3e3875-21c2-42e2-ba9b-ed981baab427" (UID: "0d3e3875-21c2-42e2-ba9b-ed981baab427"). InnerVolumeSpecName "kube-api-access-hmsds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.444489 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3e3875-21c2-42e2-ba9b-ed981baab427-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d3e3875-21c2-42e2-ba9b-ed981baab427" (UID: "0d3e3875-21c2-42e2-ba9b-ed981baab427"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.445308 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e49e861-a431-4a8f-8864-9672d699d9a0-kube-api-access-tv4qb" (OuterVolumeSpecName: "kube-api-access-tv4qb") pod "7e49e861-a431-4a8f-8864-9672d699d9a0" (UID: "7e49e861-a431-4a8f-8864-9672d699d9a0"). InnerVolumeSpecName "kube-api-access-tv4qb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.447263 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7e49e861-a431-4a8f-8864-9672d699d9a0" (UID: "7e49e861-a431-4a8f-8864-9672d699d9a0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.456230 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e49e861-a431-4a8f-8864-9672d699d9a0" (UID: "7e49e861-a431-4a8f-8864-9672d699d9a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.489704 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-config-data" (OuterVolumeSpecName: "config-data") pod "7e49e861-a431-4a8f-8864-9672d699d9a0" (UID: "7e49e861-a431-4a8f-8864-9672d699d9a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.492839 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3e3875-21c2-42e2-ba9b-ed981baab427-config-data" (OuterVolumeSpecName: "config-data") pod "0d3e3875-21c2-42e2-ba9b-ed981baab427" (UID: "0d3e3875-21c2-42e2-ba9b-ed981baab427"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.511806 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.511836 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3e3875-21c2-42e2-ba9b-ed981baab427-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.511851 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmsds\" (UniqueName: \"kubernetes.io/projected/0d3e3875-21c2-42e2-ba9b-ed981baab427-kube-api-access-hmsds\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.511862 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv4qb\" (UniqueName: \"kubernetes.io/projected/7e49e861-a431-4a8f-8864-9672d699d9a0-kube-api-access-tv4qb\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.511870 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.511879 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3e3875-21c2-42e2-ba9b-ed981baab427-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.511887 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e49e861-a431-4a8f-8864-9672d699d9a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.869141 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-nwnrz" event={"ID":"7e49e861-a431-4a8f-8864-9672d699d9a0","Type":"ContainerDied","Data":"6fc6b0f46a38dad050360e6fda3f23c9e65314f4e039911a64f82b0e5396dcfd"} Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.869195 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fc6b0f46a38dad050360e6fda3f23c9e65314f4e039911a64f82b0e5396dcfd" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.869222 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-nwnrz" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.871228 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5fzmw" event={"ID":"0d3e3875-21c2-42e2-ba9b-ed981baab427","Type":"ContainerDied","Data":"debda5dd84e6a531202a260e333a4bb47e7301cf026556235030a102bc712571"} Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.871281 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="debda5dd84e6a531202a260e333a4bb47e7301cf026556235030a102bc712571" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.871346 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5fzmw" Jan 27 20:29:11 crc kubenswrapper[4858]: I0127 20:29:11.873162 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kr2j4" event={"ID":"e04ce574-5470-43ae-8207-fb01bd98805f","Type":"ContainerStarted","Data":"4ce9da19d3b4a24b22a6cbac46ac50f7eb358a3795641085303a4e9443f58bd5"} Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.433295 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kr2j4" podStartSLOduration=2.821267281 podStartE2EDuration="15.433274949s" podCreationTimestamp="2026-01-27 20:28:57 +0000 UTC" firstStartedPulling="2026-01-27 20:28:58.624828498 +0000 UTC m=+1283.332644204" lastFinishedPulling="2026-01-27 20:29:11.236836166 +0000 UTC m=+1295.944651872" observedRunningTime="2026-01-27 20:29:11.888931665 +0000 UTC m=+1296.596747391" watchObservedRunningTime="2026-01-27 20:29:12.433274949 +0000 UTC m=+1297.141090655" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.715910 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-gb52w"] Jan 27 20:29:12 crc kubenswrapper[4858]: E0127 20:29:12.719330 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e49e861-a431-4a8f-8864-9672d699d9a0" containerName="watcher-db-sync" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.719384 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e49e861-a431-4a8f-8864-9672d699d9a0" containerName="watcher-db-sync" Jan 27 20:29:12 crc kubenswrapper[4858]: E0127 20:29:12.719410 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d3e3875-21c2-42e2-ba9b-ed981baab427" containerName="keystone-db-sync" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.719419 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d3e3875-21c2-42e2-ba9b-ed981baab427" containerName="keystone-db-sync" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.719874 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d3e3875-21c2-42e2-ba9b-ed981baab427" containerName="keystone-db-sync" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.719903 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e49e861-a431-4a8f-8864-9672d699d9a0" containerName="watcher-db-sync" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.720869 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.725360 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8498f5cc8c-dwdxx"] Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.727696 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.731344 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.731567 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.731756 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.738415 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-fernet-keys\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.738471 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-scripts\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.738502 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-config-data\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.738523 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-credential-keys\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.738583 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-combined-ca-bundle\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.738613 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ptbg\" (UniqueName: \"kubernetes.io/projected/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-kube-api-access-8ptbg\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.742321 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q4vbp" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.744198 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.768017 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8498f5cc8c-dwdxx"] Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.781014 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-bootstrap-gb52w"] Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.840762 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-config-data\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.841067 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-credential-keys\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.841213 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-config\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.841466 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-combined-ca-bundle\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.841581 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ptbg\" (UniqueName: \"kubernetes.io/projected/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-kube-api-access-8ptbg\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.841728 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-ovsdbserver-sb\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.841839 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-ovsdbserver-nb\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.841920 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxz7t\" (UniqueName: \"kubernetes.io/projected/30594d2d-c274-480d-849d-a692b37ba029-kube-api-access-mxz7t\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.842024 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-dns-svc\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") 
" pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.842139 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-fernet-keys\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.842251 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-dns-swift-storage-0\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.842372 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-scripts\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.851285 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-credential-keys\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.856125 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-combined-ca-bundle\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.863268 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-fernet-keys\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.871834 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-config-data\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.875041 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-scripts\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.935297 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ptbg\" (UniqueName: \"kubernetes.io/projected/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-kube-api-access-8ptbg\") pod \"keystone-bootstrap-gb52w\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.947390 4858 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-dns-swift-storage-0\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.947597 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-config\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.947868 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-ovsdbserver-sb\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.947935 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-ovsdbserver-nb\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.947963 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxz7t\" (UniqueName: \"kubernetes.io/projected/30594d2d-c274-480d-849d-a692b37ba029-kube-api-access-mxz7t\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.948046 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-dns-svc\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.949284 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-dns-svc\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.949913 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-dns-swift-storage-0\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.950462 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-config\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.953412 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-ovsdbserver-sb\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:12 crc kubenswrapper[4858]: I0127 20:29:12.954816 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-ovsdbserver-nb\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.020247 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.022119 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.025541 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxz7t\" (UniqueName: \"kubernetes.io/projected/30594d2d-c274-480d-849d-a692b37ba029-kube-api-access-mxz7t\") pod \"dnsmasq-dns-8498f5cc8c-dwdxx\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.049466 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b757c9de-8297-419d-9048-72cdf387c52d-logs\") pod \"watcher-applier-0\" (UID: \"b757c9de-8297-419d-9048-72cdf387c52d\") " pod="openstack/watcher-applier-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.049517 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b757c9de-8297-419d-9048-72cdf387c52d-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"b757c9de-8297-419d-9048-72cdf387c52d\") " pod="openstack/watcher-applier-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.049672 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b757c9de-8297-419d-9048-72cdf387c52d-config-data\") pod \"watcher-applier-0\" (UID: \"b757c9de-8297-419d-9048-72cdf387c52d\") " pod="openstack/watcher-applier-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.049700 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c8kv\" (UniqueName: \"kubernetes.io/projected/b757c9de-8297-419d-9048-72cdf387c52d-kube-api-access-6c8kv\") pod \"watcher-applier-0\" (UID: \"b757c9de-8297-419d-9048-72cdf387c52d\") " pod="openstack/watcher-applier-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.060631 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.070494 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.073295 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.077691 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-gt8qw" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.080721 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.091702 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-nsgb9"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.092957 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.112444 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.112910 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9thbj" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.113431 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.153251 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c8kv\" (UniqueName: \"kubernetes.io/projected/b757c9de-8297-419d-9048-72cdf387c52d-kube-api-access-6c8kv\") pod \"watcher-applier-0\" (UID: \"b757c9de-8297-419d-9048-72cdf387c52d\") " pod="openstack/watcher-applier-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.153331 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-db-sync-config-data\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.153381 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-scripts\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.153450 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdtxf\" (UniqueName: \"kubernetes.io/projected/8222b78c-e8de-4992-8c5b-bcf030d629ff-kube-api-access-wdtxf\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.153504 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-config-data\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.153535 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-combined-ca-bundle\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " 
pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.153590 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b757c9de-8297-419d-9048-72cdf387c52d-logs\") pod \"watcher-applier-0\" (UID: \"b757c9de-8297-419d-9048-72cdf387c52d\") " pod="openstack/watcher-applier-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.153624 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b757c9de-8297-419d-9048-72cdf387c52d-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"b757c9de-8297-419d-9048-72cdf387c52d\") " pod="openstack/watcher-applier-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.153677 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8222b78c-e8de-4992-8c5b-bcf030d629ff-etc-machine-id\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.153708 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b757c9de-8297-419d-9048-72cdf387c52d-config-data\") pod \"watcher-applier-0\" (UID: \"b757c9de-8297-419d-9048-72cdf387c52d\") " pod="openstack/watcher-applier-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.154126 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-jlwch"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.155446 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jlwch" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.156646 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b757c9de-8297-419d-9048-72cdf387c52d-logs\") pod \"watcher-applier-0\" (UID: \"b757c9de-8297-419d-9048-72cdf387c52d\") " pod="openstack/watcher-applier-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.174833 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b757c9de-8297-419d-9048-72cdf387c52d-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"b757c9de-8297-419d-9048-72cdf387c52d\") " pod="openstack/watcher-applier-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.175947 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.176641 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.176758 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sk9xg" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.194895 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b757c9de-8297-419d-9048-72cdf387c52d-config-data\") pod \"watcher-applier-0\" (UID: \"b757c9de-8297-419d-9048-72cdf387c52d\") " pod="openstack/watcher-applier-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.199216 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nsgb9"] Jan 27 
20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.255923 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdtxf\" (UniqueName: \"kubernetes.io/projected/8222b78c-e8de-4992-8c5b-bcf030d629ff-kube-api-access-wdtxf\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.256022 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-config-data\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.256071 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-combined-ca-bundle\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.256110 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46phc\" (UniqueName: \"kubernetes.io/projected/734f1877-8907-44ff-b8af-c1a5f1b1395d-kube-api-access-46phc\") pod \"neutron-db-sync-jlwch\" (UID: \"734f1877-8907-44ff-b8af-c1a5f1b1395d\") " pod="openstack/neutron-db-sync-jlwch" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.256207 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/734f1877-8907-44ff-b8af-c1a5f1b1395d-config\") pod \"neutron-db-sync-jlwch\" (UID: \"734f1877-8907-44ff-b8af-c1a5f1b1395d\") " pod="openstack/neutron-db-sync-jlwch" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.256237 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8222b78c-e8de-4992-8c5b-bcf030d629ff-etc-machine-id\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.256305 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/734f1877-8907-44ff-b8af-c1a5f1b1395d-combined-ca-bundle\") pod \"neutron-db-sync-jlwch\" (UID: \"734f1877-8907-44ff-b8af-c1a5f1b1395d\") " pod="openstack/neutron-db-sync-jlwch" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.256341 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-db-sync-config-data\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.256468 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-scripts\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.271259 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8222b78c-e8de-4992-8c5b-bcf030d629ff-etc-machine-id\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.271936 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jlwch"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.272827 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-scripts\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.277153 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-config-data\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.278155 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-db-sync-config-data\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.278256 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-combined-ca-bundle\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.296768 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c8kv\" (UniqueName: \"kubernetes.io/projected/b757c9de-8297-419d-9048-72cdf387c52d-kube-api-access-6c8kv\") pod \"watcher-applier-0\" (UID: \"b757c9de-8297-419d-9048-72cdf387c52d\") " pod="openstack/watcher-applier-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.330515 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdtxf\" (UniqueName: \"kubernetes.io/projected/8222b78c-e8de-4992-8c5b-bcf030d629ff-kube-api-access-wdtxf\") pod \"cinder-db-sync-nsgb9\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") " pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.368913 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.378098 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/734f1877-8907-44ff-b8af-c1a5f1b1395d-config\") pod \"neutron-db-sync-jlwch\" (UID: \"734f1877-8907-44ff-b8af-c1a5f1b1395d\") " pod="openstack/neutron-db-sync-jlwch" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.378211 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/734f1877-8907-44ff-b8af-c1a5f1b1395d-combined-ca-bundle\") pod \"neutron-db-sync-jlwch\" (UID: \"734f1877-8907-44ff-b8af-c1a5f1b1395d\") " pod="openstack/neutron-db-sync-jlwch" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.378445 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46phc\" (UniqueName: \"kubernetes.io/projected/734f1877-8907-44ff-b8af-c1a5f1b1395d-kube-api-access-46phc\") pod \"neutron-db-sync-jlwch\" (UID: \"734f1877-8907-44ff-b8af-c1a5f1b1395d\") " pod="openstack/neutron-db-sync-jlwch" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.381342 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/734f1877-8907-44ff-b8af-c1a5f1b1395d-config\") pod \"neutron-db-sync-jlwch\" (UID: \"734f1877-8907-44ff-b8af-c1a5f1b1395d\") " pod="openstack/neutron-db-sync-jlwch" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.389144 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/734f1877-8907-44ff-b8af-c1a5f1b1395d-combined-ca-bundle\") pod \"neutron-db-sync-jlwch\" (UID: \"734f1877-8907-44ff-b8af-c1a5f1b1395d\") " pod="openstack/neutron-db-sync-jlwch" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.405516 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.407074 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.439445 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46phc\" (UniqueName: \"kubernetes.io/projected/734f1877-8907-44ff-b8af-c1a5f1b1395d-kube-api-access-46phc\") pod \"neutron-db-sync-jlwch\" (UID: \"734f1877-8907-44ff-b8af-c1a5f1b1395d\") " pod="openstack/neutron-db-sync-jlwch" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.440797 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.491534 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-678cc97f57-w9dmc"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.503409 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.507947 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.512853 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-x26mz" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.513290 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.513430 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.539685 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nsgb9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.612879 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.612958 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.612996 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.613228 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk88c\" (UniqueName: \"kubernetes.io/projected/3278beeb-52a3-4351-92f1-839e98e59395-kube-api-access-gk88c\") pod \"watcher-decision-engine-0\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.613465 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3278beeb-52a3-4351-92f1-839e98e59395-logs\") pod \"watcher-decision-engine-0\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.677563 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.703131 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jlwch" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.728417 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk88c\" (UniqueName: \"kubernetes.io/projected/3278beeb-52a3-4351-92f1-839e98e59395-kube-api-access-gk88c\") pod \"watcher-decision-engine-0\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.729046 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3278beeb-52a3-4351-92f1-839e98e59395-logs\") pod \"watcher-decision-engine-0\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.729095 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f3aa248-8818-4e60-9946-16d08aecd5ab-config-data\") pod \"horizon-678cc97f57-w9dmc\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.729237 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0f3aa248-8818-4e60-9946-16d08aecd5ab-horizon-secret-key\") pod \"horizon-678cc97f57-w9dmc\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.729263 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f3aa248-8818-4e60-9946-16d08aecd5ab-scripts\") pod \"horizon-678cc97f57-w9dmc\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.729351 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f3aa248-8818-4e60-9946-16d08aecd5ab-logs\") pod \"horizon-678cc97f57-w9dmc\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.729437 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx8v4\" (UniqueName: \"kubernetes.io/projected/0f3aa248-8818-4e60-9946-16d08aecd5ab-kube-api-access-kx8v4\") pod \"horizon-678cc97f57-w9dmc\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.729513 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.729577 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " 
pod="openstack/watcher-decision-engine-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.729611 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.733467 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3278beeb-52a3-4351-92f1-839e98e59395-logs\") pod \"watcher-decision-engine-0\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.751966 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.752946 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.755364 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.756994 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.762524 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.763133 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.769479 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-678cc97f57-w9dmc"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.778449 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk88c\" (UniqueName: \"kubernetes.io/projected/3278beeb-52a3-4351-92f1-839e98e59395-kube-api-access-gk88c\") pod \"watcher-decision-engine-0\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.799109 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.813688 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-6n2n5"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.818734 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.818989 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6n2n5" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.821307 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.824756 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.825079 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fxzv9" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.825674 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.831078 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.831458 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vr4w\" (UniqueName: \"kubernetes.io/projected/047f39f4-e397-46e4-a998-4bf8060a1114-kube-api-access-9vr4w\") pod \"barbican-db-sync-6n2n5\" (UID: \"047f39f4-e397-46e4-a998-4bf8060a1114\") " pod="openstack/barbican-db-sync-6n2n5" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.831521 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.831574 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f3aa248-8818-4e60-9946-16d08aecd5ab-config-data\") pod \"horizon-678cc97f57-w9dmc\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.831597 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/047f39f4-e397-46e4-a998-4bf8060a1114-combined-ca-bundle\") pod \"barbican-db-sync-6n2n5\" (UID: \"047f39f4-e397-46e4-a998-4bf8060a1114\") " pod="openstack/barbican-db-sync-6n2n5" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.831734 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjffz\" (UniqueName: \"kubernetes.io/projected/568d0b2e-3e4c-493a-bbf3-68021c02efd2-kube-api-access-cjffz\") pod \"watcher-api-0\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " pod="openstack/watcher-api-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.831752 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-scripts\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.831771 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: 
\"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " pod="openstack/watcher-api-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.831793 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0f3aa248-8818-4e60-9946-16d08aecd5ab-horizon-secret-key\") pod \"horizon-678cc97f57-w9dmc\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.831811 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f3aa248-8818-4e60-9946-16d08aecd5ab-scripts\") pod \"horizon-678cc97f57-w9dmc\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.831835 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " pod="openstack/watcher-api-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.831857 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.831875 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-log-httpd\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.831890 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/568d0b2e-3e4c-493a-bbf3-68021c02efd2-logs\") pod \"watcher-api-0\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " pod="openstack/watcher-api-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.832838 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f3aa248-8818-4e60-9946-16d08aecd5ab-scripts\") pod \"horizon-678cc97f57-w9dmc\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.832967 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f3aa248-8818-4e60-9946-16d08aecd5ab-config-data\") pod \"horizon-678cc97f57-w9dmc\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.834808 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fzcl\" (UniqueName: \"kubernetes.io/projected/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-kube-api-access-6fzcl\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.834851 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f3aa248-8818-4e60-9946-16d08aecd5ab-logs\") pod \"horizon-678cc97f57-w9dmc\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.834889 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-config-data\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.834920 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/047f39f4-e397-46e4-a998-4bf8060a1114-db-sync-config-data\") pod \"barbican-db-sync-6n2n5\" (UID: \"047f39f4-e397-46e4-a998-4bf8060a1114\") " pod="openstack/barbican-db-sync-6n2n5" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.834963 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx8v4\" (UniqueName: \"kubernetes.io/projected/0f3aa248-8818-4e60-9946-16d08aecd5ab-kube-api-access-kx8v4\") pod \"horizon-678cc97f57-w9dmc\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.835150 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-run-httpd\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.839276 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-config-data\") pod \"watcher-api-0\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " pod="openstack/watcher-api-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.839753 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f3aa248-8818-4e60-9946-16d08aecd5ab-logs\") pod \"horizon-678cc97f57-w9dmc\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.849874 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0f3aa248-8818-4e60-9946-16d08aecd5ab-horizon-secret-key\") pod \"horizon-678cc97f57-w9dmc\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.852371 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6n2n5"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.860892 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.872685 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6544888b69-dvcr4"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.875565 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.877291 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx8v4\" (UniqueName: \"kubernetes.io/projected/0f3aa248-8818-4e60-9946-16d08aecd5ab-kube-api-access-kx8v4\") pod \"horizon-678cc97f57-w9dmc\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.899821 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-gc4mg"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.902858 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gc4mg" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.906377 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-crgtq" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.906900 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.907116 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.912384 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6544888b69-dvcr4"] Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.943431 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.943479 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7c7b1cd-a2a1-4bd2-a57c-715448327967-logs\") pod \"placement-db-sync-gc4mg\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") " pod="openstack/placement-db-sync-gc4mg" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.943516 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/047f39f4-e397-46e4-a998-4bf8060a1114-combined-ca-bundle\") pod \"barbican-db-sync-6n2n5\" (UID: \"047f39f4-e397-46e4-a998-4bf8060a1114\") " pod="openstack/barbican-db-sync-6n2n5" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.943539 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74707222-b7c2-4226-8df2-2459cb7d447c-config-data\") pod \"horizon-6544888b69-dvcr4\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.943615 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74707222-b7c2-4226-8df2-2459cb7d447c-logs\") pod \"horizon-6544888b69-dvcr4\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.943641 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjffz\" (UniqueName: 
\"kubernetes.io/projected/568d0b2e-3e4c-493a-bbf3-68021c02efd2-kube-api-access-cjffz\") pod \"watcher-api-0\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " pod="openstack/watcher-api-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.943658 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-scripts\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.943677 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " pod="openstack/watcher-api-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.943694 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74707222-b7c2-4226-8df2-2459cb7d447c-horizon-secret-key\") pod \"horizon-6544888b69-dvcr4\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.943723 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " pod="openstack/watcher-api-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.943751 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.944714 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-log-httpd\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.944735 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/568d0b2e-3e4c-493a-bbf3-68021c02efd2-logs\") pod \"watcher-api-0\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " pod="openstack/watcher-api-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.944753 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74707222-b7c2-4226-8df2-2459cb7d447c-scripts\") pod \"horizon-6544888b69-dvcr4\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.944778 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fzcl\" (UniqueName: \"kubernetes.io/projected/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-kube-api-access-6fzcl\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.944798 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-combined-ca-bundle\") pod \"placement-db-sync-gc4mg\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") " pod="openstack/placement-db-sync-gc4mg" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.944820 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-config-data\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.944853 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/047f39f4-e397-46e4-a998-4bf8060a1114-db-sync-config-data\") pod \"barbican-db-sync-6n2n5\" (UID: \"047f39f4-e397-46e4-a998-4bf8060a1114\") " pod="openstack/barbican-db-sync-6n2n5" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.944929 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwn2j\" (UniqueName: \"kubernetes.io/projected/b7c7b1cd-a2a1-4bd2-a57c-715448327967-kube-api-access-gwn2j\") pod \"placement-db-sync-gc4mg\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") " pod="openstack/placement-db-sync-gc4mg" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.944959 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njj25\" (UniqueName: \"kubernetes.io/projected/74707222-b7c2-4226-8df2-2459cb7d447c-kube-api-access-njj25\") pod \"horizon-6544888b69-dvcr4\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.947619 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-config-data\") pod \"placement-db-sync-gc4mg\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") " pod="openstack/placement-db-sync-gc4mg" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.947731 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-run-httpd\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.947781 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-config-data\") pod \"watcher-api-0\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " pod="openstack/watcher-api-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.947837 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vr4w\" (UniqueName: \"kubernetes.io/projected/047f39f4-e397-46e4-a998-4bf8060a1114-kube-api-access-9vr4w\") pod \"barbican-db-sync-6n2n5\" (UID: \"047f39f4-e397-46e4-a998-4bf8060a1114\") " pod="openstack/barbican-db-sync-6n2n5" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.947881 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-scripts\") pod \"placement-db-sync-gc4mg\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") " pod="openstack/placement-db-sync-gc4mg" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.950872 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/568d0b2e-3e4c-493a-bbf3-68021c02efd2-logs\") pod \"watcher-api-0\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " pod="openstack/watcher-api-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.955246 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/047f39f4-e397-46e4-a998-4bf8060a1114-combined-ca-bundle\") pod \"barbican-db-sync-6n2n5\" (UID: \"047f39f4-e397-46e4-a998-4bf8060a1114\") " pod="openstack/barbican-db-sync-6n2n5" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.955987 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.965766 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-config-data\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.966110 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-run-httpd\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.976900 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-scripts\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.976900 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-log-httpd\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.977336 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.981239 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " pod="openstack/watcher-api-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.983092 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-config-data\") pod \"watcher-api-0\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " pod="openstack/watcher-api-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.983351 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.991258 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjffz\" (UniqueName: \"kubernetes.io/projected/568d0b2e-3e4c-493a-bbf3-68021c02efd2-kube-api-access-cjffz\") pod \"watcher-api-0\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " pod="openstack/watcher-api-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.992020 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " pod="openstack/watcher-api-0" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.994702 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/047f39f4-e397-46e4-a998-4bf8060a1114-db-sync-config-data\") pod \"barbican-db-sync-6n2n5\" (UID: \"047f39f4-e397-46e4-a998-4bf8060a1114\") " pod="openstack/barbican-db-sync-6n2n5" Jan 27 20:29:13 crc kubenswrapper[4858]: I0127 20:29:13.994777 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gc4mg"] Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.004503 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fzcl\" (UniqueName: \"kubernetes.io/projected/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-kube-api-access-6fzcl\") pod \"ceilometer-0\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " pod="openstack/ceilometer-0" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.015729 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vr4w\" (UniqueName: \"kubernetes.io/projected/047f39f4-e397-46e4-a998-4bf8060a1114-kube-api-access-9vr4w\") pod \"barbican-db-sync-6n2n5\" (UID: \"047f39f4-e397-46e4-a998-4bf8060a1114\") " pod="openstack/barbican-db-sync-6n2n5" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.032898 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8498f5cc8c-dwdxx"] Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.054040 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-789b49c6fc-xkx87"] Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.055874 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74707222-b7c2-4226-8df2-2459cb7d447c-logs\") pod \"horizon-6544888b69-dvcr4\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.055959 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74707222-b7c2-4226-8df2-2459cb7d447c-horizon-secret-key\") pod \"horizon-6544888b69-dvcr4\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.056009 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74707222-b7c2-4226-8df2-2459cb7d447c-scripts\") pod \"horizon-6544888b69-dvcr4\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.056041 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-combined-ca-bundle\") pod \"placement-db-sync-gc4mg\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") " pod="openstack/placement-db-sync-gc4mg" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.056103 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwn2j\" (UniqueName: \"kubernetes.io/projected/b7c7b1cd-a2a1-4bd2-a57c-715448327967-kube-api-access-gwn2j\") pod \"placement-db-sync-gc4mg\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") " pod="openstack/placement-db-sync-gc4mg" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.056127 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njj25\" (UniqueName: \"kubernetes.io/projected/74707222-b7c2-4226-8df2-2459cb7d447c-kube-api-access-njj25\") pod \"horizon-6544888b69-dvcr4\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.056148 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-config-data\") pod \"placement-db-sync-gc4mg\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") " pod="openstack/placement-db-sync-gc4mg" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.056188 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-scripts\") pod \"placement-db-sync-gc4mg\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") " pod="openstack/placement-db-sync-gc4mg" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.056234 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7c7b1cd-a2a1-4bd2-a57c-715448327967-logs\") pod \"placement-db-sync-gc4mg\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") " pod="openstack/placement-db-sync-gc4mg" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.056277 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74707222-b7c2-4226-8df2-2459cb7d447c-config-data\") pod 
\"horizon-6544888b69-dvcr4\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.057543 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74707222-b7c2-4226-8df2-2459cb7d447c-config-data\") pod \"horizon-6544888b69-dvcr4\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.059020 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74707222-b7c2-4226-8df2-2459cb7d447c-scripts\") pod \"horizon-6544888b69-dvcr4\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.060380 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7c7b1cd-a2a1-4bd2-a57c-715448327967-logs\") pod \"placement-db-sync-gc4mg\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") " pod="openstack/placement-db-sync-gc4mg" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.061050 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74707222-b7c2-4226-8df2-2459cb7d447c-logs\") pod \"horizon-6544888b69-dvcr4\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.061747 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.073782 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.075619 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-config-data\") pod \"placement-db-sync-gc4mg\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") " pod="openstack/placement-db-sync-gc4mg" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.082209 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-scripts\") pod \"placement-db-sync-gc4mg\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") " pod="openstack/placement-db-sync-gc4mg" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.082238 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-combined-ca-bundle\") pod \"placement-db-sync-gc4mg\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") " pod="openstack/placement-db-sync-gc4mg" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.084015 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74707222-b7c2-4226-8df2-2459cb7d447c-horizon-secret-key\") pod \"horizon-6544888b69-dvcr4\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.087819 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwn2j\" (UniqueName: \"kubernetes.io/projected/b7c7b1cd-a2a1-4bd2-a57c-715448327967-kube-api-access-gwn2j\") pod \"placement-db-sync-gc4mg\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") " pod="openstack/placement-db-sync-gc4mg" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.088575 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njj25\" (UniqueName: \"kubernetes.io/projected/74707222-b7c2-4226-8df2-2459cb7d447c-kube-api-access-njj25\") pod \"horizon-6544888b69-dvcr4\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.100796 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-789b49c6fc-xkx87"] Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.159359 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-ovsdbserver-sb\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.159483 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-ovsdbserver-nb\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.159562 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwk2s\" (UniqueName: 
\"kubernetes.io/projected/8b883308-9933-4034-91e2-5562130c6f10-kube-api-access-pwk2s\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.159605 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-dns-swift-storage-0\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.159642 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-config\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.159668 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-dns-svc\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.185662 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.242161 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6n2n5" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.263927 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-ovsdbserver-sb\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.264030 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-ovsdbserver-nb\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.264072 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwk2s\" (UniqueName: \"kubernetes.io/projected/8b883308-9933-4034-91e2-5562130c6f10-kube-api-access-pwk2s\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.264105 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-dns-swift-storage-0\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.264144 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-config\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.264162 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-dns-svc\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.265149 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-dns-svc\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.279632 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-dns-swift-storage-0\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.279713 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-ovsdbserver-sb\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.280463 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-ovsdbserver-nb\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.284100 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.296241 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-config\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.303766 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8498f5cc8c-dwdxx"] Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.307287 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.319019 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gb52w"] Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.323538 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-gc4mg" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.349134 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwk2s\" (UniqueName: \"kubernetes.io/projected/8b883308-9933-4034-91e2-5562130c6f10-kube-api-access-pwk2s\") pod \"dnsmasq-dns-789b49c6fc-xkx87\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.418634 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.532349 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.544182 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nsgb9"] Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.597228 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jlwch"] Jan 27 20:29:14 crc kubenswrapper[4858]: W0127 20:29:14.652343 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod734f1877_8907_44ff_b8af_c1a5f1b1395d.slice/crio-c78f7601ab628ca8e384b75dd9e3d8684471c329d85b7b566ed2d7ec16e108f3 WatchSource:0}: Error finding container c78f7601ab628ca8e384b75dd9e3d8684471c329d85b7b566ed2d7ec16e108f3: Status 404 returned error can't find the container with id c78f7601ab628ca8e384b75dd9e3d8684471c329d85b7b566ed2d7ec16e108f3 Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.807423 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-678cc97f57-w9dmc"] Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.971897 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nsgb9" event={"ID":"8222b78c-e8de-4992-8c5b-bcf030d629ff","Type":"ContainerStarted","Data":"3fcf425070ff40665b2eee5bbd86f09a57f5085f491522ac01a4e53cb43bdc5a"} Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.974302 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jlwch" event={"ID":"734f1877-8907-44ff-b8af-c1a5f1b1395d","Type":"ContainerStarted","Data":"c78f7601ab628ca8e384b75dd9e3d8684471c329d85b7b566ed2d7ec16e108f3"} Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.980604 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 27 20:29:14 crc kubenswrapper[4858]: I0127 20:29:14.984903 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gb52w" event={"ID":"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3","Type":"ContainerStarted","Data":"61fc3fccfc0093a310a38c433d232e4c1414b795deccee342e3004e4f3e83496"} Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.002115 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.006068 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-678cc97f57-w9dmc" event={"ID":"0f3aa248-8818-4e60-9946-16d08aecd5ab","Type":"ContainerStarted","Data":"b2c79fbaa73acb1898d7bb08680b3e91764228da39b99f0f46f242df04951958"} Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.013837 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" 
event={"ID":"b757c9de-8297-419d-9048-72cdf387c52d","Type":"ContainerStarted","Data":"77daa581872a0e4e3da92345f882117415bf7e77f987bc259d0df8dc91c5fb3a"} Jan 27 20:29:15 crc kubenswrapper[4858]: W0127 20:29:15.015638 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod568d0b2e_3e4c_493a_bbf3_68021c02efd2.slice/crio-9969c976e194c8f43b1dfb04866115fbbe6a6b1ceda582ef3485397e2f6e8265 WatchSource:0}: Error finding container 9969c976e194c8f43b1dfb04866115fbbe6a6b1ceda582ef3485397e2f6e8265: Status 404 returned error can't find the container with id 9969c976e194c8f43b1dfb04866115fbbe6a6b1ceda582ef3485397e2f6e8265 Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.028926 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" event={"ID":"30594d2d-c274-480d-849d-a692b37ba029","Type":"ContainerStarted","Data":"f96f048d08d78b47232f67844202d79b5ba5d74c80ea87404d3c7bd3737496aa"} Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.202385 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.218642 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6n2n5"] Jan 27 20:29:15 crc kubenswrapper[4858]: W0127 20:29:15.245295 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod047f39f4_e397_46e4_a998_4bf8060a1114.slice/crio-14bbb9115745c4bd5d294ebaf2245b5823249659afe3741770d138b6054d39a5 WatchSource:0}: Error finding container 14bbb9115745c4bd5d294ebaf2245b5823249659afe3741770d138b6054d39a5: Status 404 returned error can't find the container with id 14bbb9115745c4bd5d294ebaf2245b5823249659afe3741770d138b6054d39a5 Jan 27 20:29:15 crc kubenswrapper[4858]: W0127 20:29:15.265651 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b795cea_c66d_4bca_8e9c_7da6cf08adf8.slice/crio-c8fceb6ab0163bb450e815aa6d43290c4a235f3fdfe58b2316e18554e03ae1ce WatchSource:0}: Error finding container c8fceb6ab0163bb450e815aa6d43290c4a235f3fdfe58b2316e18554e03ae1ce: Status 404 returned error can't find the container with id c8fceb6ab0163bb450e815aa6d43290c4a235f3fdfe58b2316e18554e03ae1ce Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.612862 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6544888b69-dvcr4"] Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.642085 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-789b49c6fc-xkx87"] Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.707215 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-678cc97f57-w9dmc"] Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.715386 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gc4mg"] Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.729334 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.770072 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.785647 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-65769cc6c7-8z5vr"] Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 
20:29:15.787527 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.809181 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65769cc6c7-8z5vr"] Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.896023 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59370c8c-d422-4992-a336-62b6d1e5f4d8-config-data\") pod \"horizon-65769cc6c7-8z5vr\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.896083 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59370c8c-d422-4992-a336-62b6d1e5f4d8-scripts\") pod \"horizon-65769cc6c7-8z5vr\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.896206 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phpg2\" (UniqueName: \"kubernetes.io/projected/59370c8c-d422-4992-a336-62b6d1e5f4d8-kube-api-access-phpg2\") pod \"horizon-65769cc6c7-8z5vr\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.896239 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59370c8c-d422-4992-a336-62b6d1e5f4d8-logs\") pod \"horizon-65769cc6c7-8z5vr\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.896389 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/59370c8c-d422-4992-a336-62b6d1e5f4d8-horizon-secret-key\") pod \"horizon-65769cc6c7-8z5vr\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.999811 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59370c8c-d422-4992-a336-62b6d1e5f4d8-config-data\") pod \"horizon-65769cc6c7-8z5vr\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.999856 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59370c8c-d422-4992-a336-62b6d1e5f4d8-scripts\") pod \"horizon-65769cc6c7-8z5vr\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.999916 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phpg2\" (UniqueName: \"kubernetes.io/projected/59370c8c-d422-4992-a336-62b6d1e5f4d8-kube-api-access-phpg2\") pod \"horizon-65769cc6c7-8z5vr\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.999936 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
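
The W-level "Failed to process watch event ... Status 404" entries above are a benign race: the cgroup inotify event for a new crio-* container arrives before the container runtime has registered that container ID, so the first lookup returns "can't find the container". A common way to absorb such a race is to retry the lookup briefly before giving up; the sketch below is an illustrative pattern with invented names, not cAdvisor's actual handling:

```go
// Sketch of absorbing a "watch event before runtime registration" race by
// retrying a lookup a few times. All names here are assumptions for the demo.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New("status 404: can't find the container")

// processWatchEvent retries lookup so a cgroup event that races ahead of the
// runtime's bookkeeping does not immediately surface as a warning.
func processWatchEvent(id string, lookup func(string) error) error {
	var err error
	for attempt := 0; attempt < 3; attempt++ {
		if err = lookup(id); err == nil {
			return nil
		}
		time.Sleep(10 * time.Millisecond)
	}
	return fmt.Errorf("failed to process watch event for %s: %w", id, err)
}

func main() {
	calls := 0
	// Stand-in for querying the runtime: the container becomes known only on
	// the third call, mimicking late registration by the runtime.
	lookup := func(id string) error {
		calls++
		if calls < 3 {
			return errNotFound
		}
		return nil
	}
	fmt.Println(processWatchEvent("c78f7601ab62", lookup)) // prints <nil>
}
```

In this log the race resolves itself the same way: each container that produced a 404 warning shows up moments later in a "ContainerStarted" PLEG event.
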
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/59370c8c-d422-4992-a336-62b6d1e5f4d8-logs\") pod \"horizon-65769cc6c7-8z5vr\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:15 crc kubenswrapper[4858]: I0127 20:29:15.999984 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/59370c8c-d422-4992-a336-62b6d1e5f4d8-horizon-secret-key\") pod \"horizon-65769cc6c7-8z5vr\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.000710 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59370c8c-d422-4992-a336-62b6d1e5f4d8-logs\") pod \"horizon-65769cc6c7-8z5vr\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.001290 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59370c8c-d422-4992-a336-62b6d1e5f4d8-scripts\") pod \"horizon-65769cc6c7-8z5vr\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.001502 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59370c8c-d422-4992-a336-62b6d1e5f4d8-config-data\") pod \"horizon-65769cc6c7-8z5vr\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.006060 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/59370c8c-d422-4992-a336-62b6d1e5f4d8-horizon-secret-key\") pod \"horizon-65769cc6c7-8z5vr\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.017364 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phpg2\" (UniqueName: \"kubernetes.io/projected/59370c8c-d422-4992-a336-62b6d1e5f4d8-kube-api-access-phpg2\") pod \"horizon-65769cc6c7-8z5vr\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.126172 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.174376 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gc4mg" event={"ID":"b7c7b1cd-a2a1-4bd2-a57c-715448327967","Type":"ContainerStarted","Data":"a6d3b7f7f1cd6ae1e0defc7d2ecd0c6c80234fa3527cbff9d93a2fa32b318c2a"} Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.219687 4858 generic.go:334] "Generic (PLEG): container finished" podID="30594d2d-c274-480d-849d-a692b37ba029" containerID="06c7a8fa00af075ada307d90ecba81baf795e0b5b538d8dd789094cc8039918f" exitCode=0 Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.220034 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" event={"ID":"30594d2d-c274-480d-849d-a692b37ba029","Type":"ContainerDied","Data":"06c7a8fa00af075ada307d90ecba81baf795e0b5b538d8dd789094cc8039918f"} Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.244897 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"568d0b2e-3e4c-493a-bbf3-68021c02efd2","Type":"ContainerStarted","Data":"2a8159974a3ff08e0867804f373ac7a4acc9e939a22541f1e5027651e720baea"} Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.244955 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"568d0b2e-3e4c-493a-bbf3-68021c02efd2","Type":"ContainerStarted","Data":"7121007f9877d6b3e443d32c7923e4f941e0dbb3d6370fd08813064214743d20"} Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.244968 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"568d0b2e-3e4c-493a-bbf3-68021c02efd2","Type":"ContainerStarted","Data":"9969c976e194c8f43b1dfb04866115fbbe6a6b1ceda582ef3485397e2f6e8265"} Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.245118 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="568d0b2e-3e4c-493a-bbf3-68021c02efd2" containerName="watcher-api-log" containerID="cri-o://7121007f9877d6b3e443d32c7923e4f941e0dbb3d6370fd08813064214743d20" gracePeriod=30 Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.245614 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="568d0b2e-3e4c-493a-bbf3-68021c02efd2" containerName="watcher-api" containerID="cri-o://2a8159974a3ff08e0867804f373ac7a4acc9e939a22541f1e5027651e720baea" gracePeriod=30 Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.245669 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.275396 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="568d0b2e-3e4c-493a-bbf3-68021c02efd2" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": EOF" Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.283239 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jlwch" event={"ID":"734f1877-8907-44ff-b8af-c1a5f1b1395d","Type":"ContainerStarted","Data":"fac4154d4462e64acb42205823e1c5870f70bcdc77728c1f41d181a934b9b634"} Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.296142 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7b795cea-c66d-4bca-8e9c-7da6cf08adf8","Type":"ContainerStarted","Data":"c8fceb6ab0163bb450e815aa6d43290c4a235f3fdfe58b2316e18554e03ae1ce"} Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.334031 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gb52w" event={"ID":"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3","Type":"ContainerStarted","Data":"9ece8c1246d94eb9b8bd55a0da4ce8ae246f164734bfe0a9de3c94ef0bd40bb6"} Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.356719 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6n2n5" event={"ID":"047f39f4-e397-46e4-a998-4bf8060a1114","Type":"ContainerStarted","Data":"14bbb9115745c4bd5d294ebaf2245b5823249659afe3741770d138b6054d39a5"} Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.367031 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3278beeb-52a3-4351-92f1-839e98e59395","Type":"ContainerStarted","Data":"24a63ec25650d6fabcb0e669a7e8483f6be6bf8debb823a0c4c397c414b47ef0"} Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.371529 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" event={"ID":"8b883308-9933-4034-91e2-5562130c6f10","Type":"ContainerStarted","Data":"43acb1747dd496ae396ed17824c3cfd46ac81cac217fa1bbf8d234639162781e"} Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.380594 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6544888b69-dvcr4" event={"ID":"74707222-b7c2-4226-8df2-2459cb7d447c","Type":"ContainerStarted","Data":"d22ee26562f6f5914b8bf2633b3e642784f89cb367cdda4fbd884ca946448538"} Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.792377 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-gb52w" podStartSLOduration=4.792357425 podStartE2EDuration="4.792357425s" podCreationTimestamp="2026-01-27 20:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:29:16.59803193 +0000 UTC m=+1301.305847646" watchObservedRunningTime="2026-01-27 20:29:16.792357425 +0000 UTC m=+1301.500173121" Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.820627 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.853583 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-jlwch" podStartSLOduration=4.8534989379999995 podStartE2EDuration="4.853498938s" podCreationTimestamp="2026-01-27 20:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:29:16.690468621 +0000 UTC m=+1301.398284337" watchObservedRunningTime="2026-01-27 20:29:16.853498938 +0000 UTC m=+1301.561314644" Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.895376 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.895352881 podStartE2EDuration="3.895352881s" podCreationTimestamp="2026-01-27 20:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:29:16.718383 +0000 UTC m=+1301.426198716" watchObservedRunningTime="2026-01-27 20:29:16.895352881 +0000 UTC m=+1301.603168587" Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.947774 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-dns-svc\") pod \"30594d2d-c274-480d-849d-a692b37ba029\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.947840 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxz7t\" (UniqueName: \"kubernetes.io/projected/30594d2d-c274-480d-849d-a692b37ba029-kube-api-access-mxz7t\") pod \"30594d2d-c274-480d-849d-a692b37ba029\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.947997 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-dns-swift-storage-0\") pod \"30594d2d-c274-480d-849d-a692b37ba029\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.948046 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-ovsdbserver-sb\") pod \"30594d2d-c274-480d-849d-a692b37ba029\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.948081 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-config\") pod \"30594d2d-c274-480d-849d-a692b37ba029\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.948126 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-ovsdbserver-nb\") pod \"30594d2d-c274-480d-849d-a692b37ba029\" (UID: \"30594d2d-c274-480d-849d-a692b37ba029\") " Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.964823 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30594d2d-c274-480d-849d-a692b37ba029-kube-api-access-mxz7t" 
(OuterVolumeSpecName: "kube-api-access-mxz7t") pod "30594d2d-c274-480d-849d-a692b37ba029" (UID: "30594d2d-c274-480d-849d-a692b37ba029"). InnerVolumeSpecName "kube-api-access-mxz7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.988564 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "30594d2d-c274-480d-849d-a692b37ba029" (UID: "30594d2d-c274-480d-849d-a692b37ba029"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:29:16 crc kubenswrapper[4858]: I0127 20:29:16.997171 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-config" (OuterVolumeSpecName: "config") pod "30594d2d-c274-480d-849d-a692b37ba029" (UID: "30594d2d-c274-480d-849d-a692b37ba029"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.003452 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "30594d2d-c274-480d-849d-a692b37ba029" (UID: "30594d2d-c274-480d-849d-a692b37ba029"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.006351 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "30594d2d-c274-480d-849d-a692b37ba029" (UID: "30594d2d-c274-480d-849d-a692b37ba029"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:29:17 crc kubenswrapper[4858]: W0127 20:29:17.007579 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59370c8c_d422_4992_a336_62b6d1e5f4d8.slice/crio-a360b0609ee20ea6456711bf4fb56d48bb425e8ece038dcb37fa700196ac0ed1 WatchSource:0}: Error finding container a360b0609ee20ea6456711bf4fb56d48bb425e8ece038dcb37fa700196ac0ed1: Status 404 returned error can't find the container with id a360b0609ee20ea6456711bf4fb56d48bb425e8ece038dcb37fa700196ac0ed1 Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.037393 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65769cc6c7-8z5vr"] Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.038676 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "30594d2d-c274-480d-849d-a692b37ba029" (UID: "30594d2d-c274-480d-849d-a692b37ba029"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.050330 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.050366 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.050379 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.050389 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.050399 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30594d2d-c274-480d-849d-a692b37ba029-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.050408 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxz7t\" (UniqueName: \"kubernetes.io/projected/30594d2d-c274-480d-849d-a692b37ba029-kube-api-access-mxz7t\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.408573 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65769cc6c7-8z5vr" event={"ID":"59370c8c-d422-4992-a336-62b6d1e5f4d8","Type":"ContainerStarted","Data":"a360b0609ee20ea6456711bf4fb56d48bb425e8ece038dcb37fa700196ac0ed1"} Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.411917 4858 generic.go:334] "Generic (PLEG): container finished" podID="8b883308-9933-4034-91e2-5562130c6f10" containerID="f94d3937a45c7d6a9e37a94bc78c9b78d4da0996ce1944d482f737900795b362" exitCode=0 Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.412009 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" event={"ID":"8b883308-9933-4034-91e2-5562130c6f10","Type":"ContainerStarted","Data":"f36e73984958cfc9d6db231ecc55a91c7addac4daac8dcd6c320aa7606bd832b"} Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.412062 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" event={"ID":"8b883308-9933-4034-91e2-5562130c6f10","Type":"ContainerDied","Data":"f94d3937a45c7d6a9e37a94bc78c9b78d4da0996ce1944d482f737900795b362"} Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.413565 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.427903 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.429459 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8498f5cc8c-dwdxx" event={"ID":"30594d2d-c274-480d-849d-a692b37ba029","Type":"ContainerDied","Data":"f96f048d08d78b47232f67844202d79b5ba5d74c80ea87404d3c7bd3737496aa"} Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.429592 4858 scope.go:117] "RemoveContainer" containerID="06c7a8fa00af075ada307d90ecba81baf795e0b5b538d8dd789094cc8039918f" Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.439313 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" podStartSLOduration=4.439289004 podStartE2EDuration="4.439289004s" podCreationTimestamp="2026-01-27 20:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:29:17.435111522 +0000 UTC m=+1302.142927238" watchObservedRunningTime="2026-01-27 20:29:17.439289004 +0000 UTC m=+1302.147104710" Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.446619 4858 generic.go:334] "Generic (PLEG): container finished" podID="568d0b2e-3e4c-493a-bbf3-68021c02efd2" containerID="7121007f9877d6b3e443d32c7923e4f941e0dbb3d6370fd08813064214743d20" exitCode=143 Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.446652 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"568d0b2e-3e4c-493a-bbf3-68021c02efd2","Type":"ContainerDied","Data":"7121007f9877d6b3e443d32c7923e4f941e0dbb3d6370fd08813064214743d20"} Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.528847 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8498f5cc8c-dwdxx"] Jan 27 20:29:17 crc kubenswrapper[4858]: I0127 20:29:17.540373 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8498f5cc8c-dwdxx"] Jan 27 20:29:18 crc kubenswrapper[4858]: I0127 20:29:18.086244 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30594d2d-c274-480d-849d-a692b37ba029" path="/var/lib/kubelet/pods/30594d2d-c274-480d-849d-a692b37ba029/volumes" Jan 27 20:29:19 crc kubenswrapper[4858]: I0127 20:29:19.186610 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 27 20:29:20 crc kubenswrapper[4858]: I0127 20:29:20.413747 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="568d0b2e-3e4c-493a-bbf3-68021c02efd2" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": read tcp 10.217.0.2:48866->10.217.0.152:9322: read: connection reset by peer" Jan 27 20:29:21 crc kubenswrapper[4858]: I0127 20:29:21.496766 4858 generic.go:334] "Generic (PLEG): container finished" podID="568d0b2e-3e4c-493a-bbf3-68021c02efd2" containerID="2a8159974a3ff08e0867804f373ac7a4acc9e939a22541f1e5027651e720baea" exitCode=0 Jan 27 20:29:21 crc kubenswrapper[4858]: I0127 20:29:21.496868 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"568d0b2e-3e4c-493a-bbf3-68021c02efd2","Type":"ContainerDied","Data":"2a8159974a3ff08e0867804f373ac7a4acc9e939a22541f1e5027651e720baea"} Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.358576 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6544888b69-dvcr4"] Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 
20:29:22.397652 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5f7fd77bcb-cxmbt"] Jan 27 20:29:22 crc kubenswrapper[4858]: E0127 20:29:22.398162 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30594d2d-c274-480d-849d-a692b37ba029" containerName="init" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.398180 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="30594d2d-c274-480d-849d-a692b37ba029" containerName="init" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.398383 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="30594d2d-c274-480d-849d-a692b37ba029" containerName="init" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.399545 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.408346 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.415239 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f7fd77bcb-cxmbt"] Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.482869 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65769cc6c7-8z5vr"] Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.510751 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ec05cb1-c40c-48cb-ba64-9321abb6287c-logs\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.510826 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-horizon-tls-certs\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.510860 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wjkm\" (UniqueName: \"kubernetes.io/projected/2ec05cb1-c40c-48cb-ba64-9321abb6287c-kube-api-access-4wjkm\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.510918 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-combined-ca-bundle\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.511004 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-horizon-secret-key\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.511070 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/2ec05cb1-c40c-48cb-ba64-9321abb6287c-config-data\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.511096 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ec05cb1-c40c-48cb-ba64-9321abb6287c-scripts\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.540988 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57556bc8bb-j4fhs"] Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.546693 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.562026 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57556bc8bb-j4fhs"] Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.612882 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-horizon-secret-key\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.612964 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ec05cb1-c40c-48cb-ba64-9321abb6287c-config-data\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.612992 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ec05cb1-c40c-48cb-ba64-9321abb6287c-scripts\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.613029 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ec05cb1-c40c-48cb-ba64-9321abb6287c-logs\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.613056 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-horizon-tls-certs\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.613079 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wjkm\" (UniqueName: \"kubernetes.io/projected/2ec05cb1-c40c-48cb-ba64-9321abb6287c-kube-api-access-4wjkm\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.613121 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-combined-ca-bundle\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.621617 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ec05cb1-c40c-48cb-ba64-9321abb6287c-scripts\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.628977 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ec05cb1-c40c-48cb-ba64-9321abb6287c-config-data\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.629334 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ec05cb1-c40c-48cb-ba64-9321abb6287c-logs\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.634493 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-horizon-secret-key\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.646759 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-horizon-tls-certs\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.666700 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-combined-ca-bundle\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.671212 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wjkm\" (UniqueName: \"kubernetes.io/projected/2ec05cb1-c40c-48cb-ba64-9321abb6287c-kube-api-access-4wjkm\") pod \"horizon-5f7fd77bcb-cxmbt\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.715976 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/996129af-9ae9-44ca-b677-2c27bf71847d-logs\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.716164 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/996129af-9ae9-44ca-b677-2c27bf71847d-scripts\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " 
pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.716194 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/996129af-9ae9-44ca-b677-2c27bf71847d-horizon-secret-key\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.716231 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rgl5\" (UniqueName: \"kubernetes.io/projected/996129af-9ae9-44ca-b677-2c27bf71847d-kube-api-access-8rgl5\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.716328 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996129af-9ae9-44ca-b677-2c27bf71847d-combined-ca-bundle\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.716350 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/996129af-9ae9-44ca-b677-2c27bf71847d-config-data\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.716391 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/996129af-9ae9-44ca-b677-2c27bf71847d-horizon-tls-certs\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.745097 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.818525 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/996129af-9ae9-44ca-b677-2c27bf71847d-scripts\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.818600 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/996129af-9ae9-44ca-b677-2c27bf71847d-horizon-secret-key\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.818638 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rgl5\" (UniqueName: \"kubernetes.io/projected/996129af-9ae9-44ca-b677-2c27bf71847d-kube-api-access-8rgl5\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.818705 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996129af-9ae9-44ca-b677-2c27bf71847d-combined-ca-bundle\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.818725 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/996129af-9ae9-44ca-b677-2c27bf71847d-config-data\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.818750 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/996129af-9ae9-44ca-b677-2c27bf71847d-horizon-tls-certs\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.818781 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/996129af-9ae9-44ca-b677-2c27bf71847d-logs\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.819249 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/996129af-9ae9-44ca-b677-2c27bf71847d-logs\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.819916 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/996129af-9ae9-44ca-b677-2c27bf71847d-scripts\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.822824 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/996129af-9ae9-44ca-b677-2c27bf71847d-config-data\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.834489 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/996129af-9ae9-44ca-b677-2c27bf71847d-horizon-secret-key\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.836370 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996129af-9ae9-44ca-b677-2c27bf71847d-combined-ca-bundle\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.845323 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/996129af-9ae9-44ca-b677-2c27bf71847d-horizon-tls-certs\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:22 crc kubenswrapper[4858]: I0127 20:29:22.882116 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rgl5\" (UniqueName: \"kubernetes.io/projected/996129af-9ae9-44ca-b677-2c27bf71847d-kube-api-access-8rgl5\") pod \"horizon-57556bc8bb-j4fhs\" (UID: \"996129af-9ae9-44ca-b677-2c27bf71847d\") " pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:23 crc kubenswrapper[4858]: I0127 20:29:23.170595 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:23 crc kubenswrapper[4858]: I0127 20:29:23.556629 4858 generic.go:334] "Generic (PLEG): container finished" podID="f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3" containerID="9ece8c1246d94eb9b8bd55a0da4ce8ae246f164734bfe0a9de3c94ef0bd40bb6" exitCode=0 Jan 27 20:29:23 crc kubenswrapper[4858]: I0127 20:29:23.556688 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gb52w" event={"ID":"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3","Type":"ContainerDied","Data":"9ece8c1246d94eb9b8bd55a0da4ce8ae246f164734bfe0a9de3c94ef0bd40bb6"} Jan 27 20:29:24 crc kubenswrapper[4858]: I0127 20:29:24.421837 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:29:24 crc kubenswrapper[4858]: I0127 20:29:24.487201 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b485d48dc-srxmr"] Jan 27 20:29:24 crc kubenswrapper[4858]: I0127 20:29:24.487523 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" podUID="38c884ca-127a-4e48-a05a-bd1834beb22b" containerName="dnsmasq-dns" containerID="cri-o://828cd25448002f0411534faf9fb23020ccdc3c631c333c25285470b71670395b" gracePeriod=10 Jan 27 20:29:25 crc kubenswrapper[4858]: I0127 20:29:25.585888 4858 generic.go:334] "Generic (PLEG): container finished" podID="38c884ca-127a-4e48-a05a-bd1834beb22b" containerID="828cd25448002f0411534faf9fb23020ccdc3c631c333c25285470b71670395b" exitCode=0 Jan 27 20:29:25 crc kubenswrapper[4858]: I0127 20:29:25.585956 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" event={"ID":"38c884ca-127a-4e48-a05a-bd1834beb22b","Type":"ContainerDied","Data":"828cd25448002f0411534faf9fb23020ccdc3c631c333c25285470b71670395b"} Jan 27 20:29:27 crc kubenswrapper[4858]: I0127 20:29:27.062351 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" podUID="38c884ca-127a-4e48-a05a-bd1834beb22b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.376209 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.384865 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.457331 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-credential-keys\") pod \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.457807 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-combined-ca-bundle\") pod \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.457884 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/568d0b2e-3e4c-493a-bbf3-68021c02efd2-logs\") pod \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.457916 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-scripts\") pod \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.457953 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ptbg\" (UniqueName: \"kubernetes.io/projected/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-kube-api-access-8ptbg\") pod \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.457999 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-fernet-keys\") pod \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.458730 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568d0b2e-3e4c-493a-bbf3-68021c02efd2-logs" (OuterVolumeSpecName: "logs") pod "568d0b2e-3e4c-493a-bbf3-68021c02efd2" (UID: "568d0b2e-3e4c-493a-bbf3-68021c02efd2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.459777 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-config-data\") pod \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\" (UID: \"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3\") " Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.459908 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjffz\" (UniqueName: \"kubernetes.io/projected/568d0b2e-3e4c-493a-bbf3-68021c02efd2-kube-api-access-cjffz\") pod \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.460013 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-combined-ca-bundle\") pod \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.460147 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-config-data\") pod \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.460241 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-custom-prometheus-ca\") pod \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\" (UID: \"568d0b2e-3e4c-493a-bbf3-68021c02efd2\") " Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.460964 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/568d0b2e-3e4c-493a-bbf3-68021c02efd2-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.465732 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3" (UID: "f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.466999 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-kube-api-access-8ptbg" (OuterVolumeSpecName: "kube-api-access-8ptbg") pod "f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3" (UID: "f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3"). InnerVolumeSpecName "kube-api-access-8ptbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.467373 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3" (UID: "f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.468195 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-scripts" (OuterVolumeSpecName: "scripts") pod "f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3" (UID: "f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.479411 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568d0b2e-3e4c-493a-bbf3-68021c02efd2-kube-api-access-cjffz" (OuterVolumeSpecName: "kube-api-access-cjffz") pod "568d0b2e-3e4c-493a-bbf3-68021c02efd2" (UID: "568d0b2e-3e4c-493a-bbf3-68021c02efd2"). InnerVolumeSpecName "kube-api-access-cjffz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.506772 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-config-data" (OuterVolumeSpecName: "config-data") pod "f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3" (UID: "f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.512806 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "568d0b2e-3e4c-493a-bbf3-68021c02efd2" (UID: "568d0b2e-3e4c-493a-bbf3-68021c02efd2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.518136 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3" (UID: "f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.519038 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "568d0b2e-3e4c-493a-bbf3-68021c02efd2" (UID: "568d0b2e-3e4c-493a-bbf3-68021c02efd2"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.555566 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-config-data" (OuterVolumeSpecName: "config-data") pod "568d0b2e-3e4c-493a-bbf3-68021c02efd2" (UID: "568d0b2e-3e4c-493a-bbf3-68021c02efd2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.563220 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.563264 4858 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.563282 4858 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.563299 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.563311 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.563323 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ptbg\" (UniqueName: \"kubernetes.io/projected/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-kube-api-access-8ptbg\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.563335 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.563345 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.563356 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjffz\" (UniqueName: \"kubernetes.io/projected/568d0b2e-3e4c-493a-bbf3-68021c02efd2-kube-api-access-cjffz\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.563367 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568d0b2e-3e4c-493a-bbf3-68021c02efd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.619694 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"568d0b2e-3e4c-493a-bbf3-68021c02efd2","Type":"ContainerDied","Data":"9969c976e194c8f43b1dfb04866115fbbe6a6b1ceda582ef3485397e2f6e8265"} Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.619738 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.619767 4858 scope.go:117] "RemoveContainer" containerID="2a8159974a3ff08e0867804f373ac7a4acc9e939a22541f1e5027651e720baea" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.622872 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gb52w" event={"ID":"f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3","Type":"ContainerDied","Data":"61fc3fccfc0093a310a38c433d232e4c1414b795deccee342e3004e4f3e83496"} Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.622924 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61fc3fccfc0093a310a38c433d232e4c1414b795deccee342e3004e4f3e83496" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.622965 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gb52w" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.669475 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.690848 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.700940 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 27 20:29:28 crc kubenswrapper[4858]: E0127 20:29:28.701449 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568d0b2e-3e4c-493a-bbf3-68021c02efd2" containerName="watcher-api" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.701463 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="568d0b2e-3e4c-493a-bbf3-68021c02efd2" containerName="watcher-api" Jan 27 20:29:28 crc kubenswrapper[4858]: E0127 20:29:28.701476 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568d0b2e-3e4c-493a-bbf3-68021c02efd2" containerName="watcher-api-log" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.701483 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="568d0b2e-3e4c-493a-bbf3-68021c02efd2" containerName="watcher-api-log" Jan 27 20:29:28 crc kubenswrapper[4858]: E0127 20:29:28.701498 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3" containerName="keystone-bootstrap" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.701505 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3" containerName="keystone-bootstrap" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.701795 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="568d0b2e-3e4c-493a-bbf3-68021c02efd2" containerName="watcher-api" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.701815 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3" containerName="keystone-bootstrap" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.701829 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="568d0b2e-3e4c-493a-bbf3-68021c02efd2" containerName="watcher-api-log" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.703229 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.710979 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.721611 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.766903 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " pod="openstack/watcher-api-0" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.766989 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-logs\") pod \"watcher-api-0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " pod="openstack/watcher-api-0" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.767011 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj9ss\" (UniqueName: \"kubernetes.io/projected/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-kube-api-access-bj9ss\") pod \"watcher-api-0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " pod="openstack/watcher-api-0" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.767115 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " pod="openstack/watcher-api-0" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.767175 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-config-data\") pod \"watcher-api-0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " pod="openstack/watcher-api-0" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.869594 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " pod="openstack/watcher-api-0" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.869681 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-config-data\") pod \"watcher-api-0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " pod="openstack/watcher-api-0" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.869780 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " pod="openstack/watcher-api-0" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.869879 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-logs\") pod \"watcher-api-0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " pod="openstack/watcher-api-0" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.869981 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj9ss\" (UniqueName: \"kubernetes.io/projected/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-kube-api-access-bj9ss\") pod \"watcher-api-0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " pod="openstack/watcher-api-0" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.870456 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-logs\") pod \"watcher-api-0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " pod="openstack/watcher-api-0" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.876144 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " pod="openstack/watcher-api-0" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.876344 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " pod="openstack/watcher-api-0" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.876396 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-config-data\") pod \"watcher-api-0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " pod="openstack/watcher-api-0" Jan 27 20:29:28 crc kubenswrapper[4858]: I0127 20:29:28.887199 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj9ss\" (UniqueName: \"kubernetes.io/projected/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-kube-api-access-bj9ss\") pod \"watcher-api-0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " pod="openstack/watcher-api-0" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.035504 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.188083 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="568d0b2e-3e4c-493a-bbf3-68021c02efd2" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.152:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.328724 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.328810 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.505187 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-gb52w"] Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.520428 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-gb52w"] Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.604275 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ltx6z"] Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.605749 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.607793 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q4vbp" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.608017 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.608961 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.609115 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.609159 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.621533 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ltx6z"] Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.688685 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-scripts\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.688744 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-fernet-keys\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " 
pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.688771 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-combined-ca-bundle\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.688796 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-credential-keys\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.688834 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-config-data\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.688895 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bwv2\" (UniqueName: \"kubernetes.io/projected/8dae3012-914a-4fdc-81b0-23dc98627b05-kube-api-access-8bwv2\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.791287 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bwv2\" (UniqueName: \"kubernetes.io/projected/8dae3012-914a-4fdc-81b0-23dc98627b05-kube-api-access-8bwv2\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.791493 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-scripts\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.791593 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-fernet-keys\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.791633 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-combined-ca-bundle\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.791714 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-credential-keys\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " pod="openstack/keystone-bootstrap-ltx6z" Jan 27 
20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.791798 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-config-data\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.798910 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-fernet-keys\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.800122 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-config-data\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.801891 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-credential-keys\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.808958 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-combined-ca-bundle\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.815479 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bwv2\" (UniqueName: \"kubernetes.io/projected/8dae3012-914a-4fdc-81b0-23dc98627b05-kube-api-access-8bwv2\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.818953 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-scripts\") pod \"keystone-bootstrap-ltx6z\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " pod="openstack/keystone-bootstrap-ltx6z"
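
The run of entries above is one pass of the kubelet volume manager bringing keystone-bootstrap-ltx6z from desired to actual state: the reconciler first records VerifyControllerAttachedVolume for each volume (a formality for secret and projected volumes, which have no controller attach step), then starts MountVolume, and operation_generator reports MountVolume.SetUp succeeded once the payload has been written into the pod's volume directory. The sketch below models that loop in Go; it is illustrative only, not kubelet source, with the volume names taken from the entries above.

    // Schematic model of the reconcile pass logged above: desired
    // volumes are verified, mounted, and marked as set up.
    // Illustrative sketch, not kubelet code.
    package main

    import "fmt"

    type volume struct {
        name   string // e.g. "fernet-keys"
        plugin string // e.g. "kubernetes.io/secret"
    }

    func reconcile(podUID string, desired []volume, mounted map[string]bool) {
        for _, v := range desired {
            if mounted[v.name] {
                continue // actual state already matches desired state
            }
            fmt.Printf("VerifyControllerAttachedVolume started for %q\n", v.name)
            fmt.Printf("MountVolume started for %q (%s/%s-%s)\n",
                v.name, v.plugin, podUID, v.name)
            // SetUp writes the payload into the pod's volume directory
            // (a tmpfs for secret volumes), then the volume is recorded
            // as mounted in the actual state of the world.
            mounted[v.name] = true
            fmt.Printf("MountVolume.SetUp succeeded for %q\n", v.name)
        }
    }

    func main() {
        vols := []volume{
            {"kube-api-access-8bwv2", "kubernetes.io/projected"},
            {"scripts", "kubernetes.io/secret"},
            {"fernet-keys", "kubernetes.io/secret"},
            {"combined-ca-bundle", "kubernetes.io/secret"},
            {"credential-keys", "kubernetes.io/secret"},
            {"config-data", "kubernetes.io/secret"},
        }
        reconcile("8dae3012-914a-4fdc-81b0-23dc98627b05", vols, map[string]bool{})
    }

All six mounts complete within about 30 ms, after which the kubelet finds no sandbox for the pod and starts one (next entry).
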
Jan 27 20:29:29 crc kubenswrapper[4858]: I0127 20:29:29.948699 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:30 crc kubenswrapper[4858]: I0127 20:29:30.082973 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="568d0b2e-3e4c-493a-bbf3-68021c02efd2" path="/var/lib/kubelet/pods/568d0b2e-3e4c-493a-bbf3-68021c02efd2/volumes" Jan 27 20:29:30 crc kubenswrapper[4858]: I0127 20:29:30.083655 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3" path="/var/lib/kubelet/pods/f8b1fe7f-f3ca-4ca9-9cbf-5c0cbad5e2c3/volumes" Jan 27 20:29:35 crc kubenswrapper[4858]: I0127 20:29:35.715670 4858 generic.go:334] "Generic (PLEG): container finished" podID="e04ce574-5470-43ae-8207-fb01bd98805f" containerID="4ce9da19d3b4a24b22a6cbac46ac50f7eb358a3795641085303a4e9443f58bd5" exitCode=0 Jan 27 20:29:35 crc kubenswrapper[4858]: I0127 20:29:35.715786 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kr2j4" event={"ID":"e04ce574-5470-43ae-8207-fb01bd98805f","Type":"ContainerDied","Data":"4ce9da19d3b4a24b22a6cbac46ac50f7eb358a3795641085303a4e9443f58bd5"} Jan 27 20:29:37 crc kubenswrapper[4858]: I0127 20:29:37.062112 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" podUID="38c884ca-127a-4e48-a05a-bd1834beb22b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 27 20:29:40 crc kubenswrapper[4858]: E0127 20:29:40.449126 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 27 20:29:40 crc kubenswrapper[4858]: E0127 20:29:40.450000 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-horizon:watcher_latest"
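
Both horizon pull failures above originate in the CRI image service: the copy from the registry at 38.129.56.46:5001 was aborted mid-transfer (rpc code Canceled, "copying config: context canceled"), so the first failed sync is recorded as ErrImagePull and later syncs, as the entries below show, alternate with ImagePullBackOff. Assuming the upstream kubelet defaults (an initial 10s delay, doubled per failure, capped at 5 minutes), the retry schedule looks like the following sketch.

    // Sketch of kubelet's default image-pull backoff schedule. The
    // 10s/5m constants are the upstream defaults, not values read
    // from this node's configuration.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("attempt %d: wait %v, then retry the pull\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

The &Container{...} dump that follows is the full container spec kubelet attaches to the UnhandledError log; the pod stays Pending while the backoff timer runs.
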
Jan 27 20:29:40 crc kubenswrapper[4858]: E0127 20:29:40.450177 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.129.56.46:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59ch98hc5h77h6h55fh668hf7h65ch54bhd4h57hd6hfch678hc9h669h698h558h589h56ch5fh66h657h666h559h5bbh694hc5h598h9ch5dfq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phpg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-65769cc6c7-8z5vr_openstack(59370c8c-d422-4992-a336-62b6d1e5f4d8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:29:40 crc kubenswrapper[4858]: E0127 20:29:40.458410 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.46:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-65769cc6c7-8z5vr" podUID="59370c8c-d422-4992-a336-62b6d1e5f4d8" Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.063653 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" podUID="38c884ca-127a-4e48-a05a-bd1834beb22b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.064430 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" Jan 27 20:29:42 crc kubenswrapper[4858]: E0127 20:29:42.074346 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Jan 27 20:29:42 crc kubenswrapper[4858]: E0127 20:29:42.074415 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Jan 27 20:29:42 crc 
kubenswrapper[4858]: E0127 20:29:42.074611 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.129.56.46:5001/podified-master-centos10/openstack-placement-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwn2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-gc4mg_openstack(b7c7b1cd-a2a1-4bd2-a57c-715448327967): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:29:42 crc kubenswrapper[4858]: E0127 20:29:42.075802 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-gc4mg" podUID="b7c7b1cd-a2a1-4bd2-a57c-715448327967" Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.218247 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.288721 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-ovsdbserver-sb\") pod \"38c884ca-127a-4e48-a05a-bd1834beb22b\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.288779 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-ovsdbserver-nb\") pod \"38c884ca-127a-4e48-a05a-bd1834beb22b\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.288848 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jzsj\" (UniqueName: \"kubernetes.io/projected/38c884ca-127a-4e48-a05a-bd1834beb22b-kube-api-access-7jzsj\") pod \"38c884ca-127a-4e48-a05a-bd1834beb22b\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.288932 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-dns-swift-storage-0\") pod \"38c884ca-127a-4e48-a05a-bd1834beb22b\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.288971 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-config\") pod \"38c884ca-127a-4e48-a05a-bd1834beb22b\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.289055 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-dns-svc\") pod \"38c884ca-127a-4e48-a05a-bd1834beb22b\" (UID: \"38c884ca-127a-4e48-a05a-bd1834beb22b\") " Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.294961 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c884ca-127a-4e48-a05a-bd1834beb22b-kube-api-access-7jzsj" (OuterVolumeSpecName: "kube-api-access-7jzsj") pod "38c884ca-127a-4e48-a05a-bd1834beb22b" (UID: "38c884ca-127a-4e48-a05a-bd1834beb22b"). InnerVolumeSpecName "kube-api-access-7jzsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.341497 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "38c884ca-127a-4e48-a05a-bd1834beb22b" (UID: "38c884ca-127a-4e48-a05a-bd1834beb22b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
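
Teardown mirrors the mount pass seen at 20:29:29: for every volume of the terminating dnsmasq-dns-6b485d48dc-srxmr pod the reconciler starts UnmountVolume (reconciler_common.go:159), operation_generator confirms UnmountVolume.TearDown (operation_generator.go:803), and the reconciler finally reports the volume detached (reconciler_common.go:293, below). Each TearDown entry carries an outer and an inner volume name; the minimal type below is a hypothetical illustration of that record, not a kubelet type.

    // The two names logged by UnmountVolume.TearDown. For the
    // configmap and projected volumes here they coincide; they
    // diverge mainly for PVC-backed volumes, where the inner name
    // identifies the bound PV. Hypothetical struct for illustration.
    package main

    import "fmt"

    type unmountOp struct {
        OuterVolumeSpecName string // volume name in the pod spec
        InnerVolumeSpecName string // plugin-level name actually torn down
        PluginName          string // e.g. "kubernetes.io/configmap"
    }

    func main() {
        op := unmountOp{
            OuterVolumeSpecName: "dns-svc",
            InnerVolumeSpecName: "dns-svc",
            PluginName:          "kubernetes.io/configmap",
        }
        fmt.Printf("%+v\n", op)
    }
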
Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.348257 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "38c884ca-127a-4e48-a05a-bd1834beb22b" (UID: "38c884ca-127a-4e48-a05a-bd1834beb22b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.354705 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38c884ca-127a-4e48-a05a-bd1834beb22b" (UID: "38c884ca-127a-4e48-a05a-bd1834beb22b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.360764 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-config" (OuterVolumeSpecName: "config") pod "38c884ca-127a-4e48-a05a-bd1834beb22b" (UID: "38c884ca-127a-4e48-a05a-bd1834beb22b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.367926 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "38c884ca-127a-4e48-a05a-bd1834beb22b" (UID: "38c884ca-127a-4e48-a05a-bd1834beb22b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.391110 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.391153 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.391167 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jzsj\" (UniqueName: \"kubernetes.io/projected/38c884ca-127a-4e48-a05a-bd1834beb22b-kube-api-access-7jzsj\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.391180 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.391195 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.391205 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38c884ca-127a-4e48-a05a-bd1834beb22b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.782319 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" event={"ID":"38c884ca-127a-4e48-a05a-bd1834beb22b","Type":"ContainerDied","Data":"4ee80fc1ba41f3e5c054119b17dc67fdbe597e2285dcdcd671cf9e99ca3b393e"}
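
The ContainerDied event closes the sequence opened by the readiness failures at 20:29:37 and 20:29:42: the probe could not reach dnsmasq on 10.217.0.133:5353 before its timeout, the container was taken down, and the API object is deleted in favour of a replacement (SyncLoop DELETE/REMOVE below, followed by the ADD of dnsmasq-dns-9d9cb9f97-shwkc at 20:29:44). The probe output is a plain TCP dial timeout; at its core such a check reduces to a dial with a deadline, as in this minimal sketch (kubelet's prober adds thresholds and result caching on top):

    // Minimal TCP readiness check mirroring the failed probe above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func ready(addr string, timeout time.Duration) bool {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            // e.g. "dial tcp 10.217.0.133:5353: i/o timeout"
            fmt.Println(err)
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        fmt.Println(ready("10.217.0.133:5353", 1*time.Second))
    }
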
Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.782363 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b485d48dc-srxmr" Jan 27 20:29:42 crc kubenswrapper[4858]: E0127 20:29:42.786053 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.46:5001/podified-master-centos10/openstack-placement-api:watcher_latest\\\"\"" pod="openstack/placement-db-sync-gc4mg" podUID="b7c7b1cd-a2a1-4bd2-a57c-715448327967" Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.846146 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b485d48dc-srxmr"] Jan 27 20:29:42 crc kubenswrapper[4858]: I0127 20:29:42.856776 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b485d48dc-srxmr"] Jan 27 20:29:43 crc kubenswrapper[4858]: E0127 20:29:43.282069 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 27 20:29:43 crc kubenswrapper[4858]: E0127 20:29:43.282141 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 27 20:29:43 crc kubenswrapper[4858]: E0127 20:29:43.282288 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.129.56.46:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdtxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivi
legeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-nsgb9_openstack(8222b78c-e8de-4992-8c5b-bcf030d629ff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:29:43 crc kubenswrapper[4858]: E0127 20:29:43.283568 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-nsgb9" podUID="8222b78c-e8de-4992-8c5b-bcf030d629ff" Jan 27 20:29:43 crc kubenswrapper[4858]: I0127 20:29:43.355224 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kr2j4" Jan 27 20:29:43 crc kubenswrapper[4858]: I0127 20:29:43.426755 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-combined-ca-bundle\") pod \"e04ce574-5470-43ae-8207-fb01bd98805f\" (UID: \"e04ce574-5470-43ae-8207-fb01bd98805f\") " Jan 27 20:29:43 crc kubenswrapper[4858]: I0127 20:29:43.426895 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-db-sync-config-data\") pod \"e04ce574-5470-43ae-8207-fb01bd98805f\" (UID: \"e04ce574-5470-43ae-8207-fb01bd98805f\") " Jan 27 20:29:43 crc kubenswrapper[4858]: I0127 20:29:43.427074 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v9kf\" (UniqueName: \"kubernetes.io/projected/e04ce574-5470-43ae-8207-fb01bd98805f-kube-api-access-9v9kf\") pod \"e04ce574-5470-43ae-8207-fb01bd98805f\" (UID: \"e04ce574-5470-43ae-8207-fb01bd98805f\") " Jan 27 20:29:43 crc kubenswrapper[4858]: I0127 20:29:43.427285 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-config-data\") pod \"e04ce574-5470-43ae-8207-fb01bd98805f\" (UID: \"e04ce574-5470-43ae-8207-fb01bd98805f\") " Jan 27 20:29:43 crc kubenswrapper[4858]: I0127 20:29:43.430926 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e04ce574-5470-43ae-8207-fb01bd98805f" (UID: "e04ce574-5470-43ae-8207-fb01bd98805f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:43 crc kubenswrapper[4858]: I0127 20:29:43.432746 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e04ce574-5470-43ae-8207-fb01bd98805f-kube-api-access-9v9kf" (OuterVolumeSpecName: "kube-api-access-9v9kf") pod "e04ce574-5470-43ae-8207-fb01bd98805f" (UID: "e04ce574-5470-43ae-8207-fb01bd98805f"). InnerVolumeSpecName "kube-api-access-9v9kf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:29:43 crc kubenswrapper[4858]: I0127 20:29:43.458456 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e04ce574-5470-43ae-8207-fb01bd98805f" (UID: "e04ce574-5470-43ae-8207-fb01bd98805f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:43 crc kubenswrapper[4858]: I0127 20:29:43.479943 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-config-data" (OuterVolumeSpecName: "config-data") pod "e04ce574-5470-43ae-8207-fb01bd98805f" (UID: "e04ce574-5470-43ae-8207-fb01bd98805f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:43 crc kubenswrapper[4858]: I0127 20:29:43.530059 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v9kf\" (UniqueName: \"kubernetes.io/projected/e04ce574-5470-43ae-8207-fb01bd98805f-kube-api-access-9v9kf\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:43 crc kubenswrapper[4858]: I0127 20:29:43.530102 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:43 crc kubenswrapper[4858]: I0127 20:29:43.530116 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:43 crc kubenswrapper[4858]: I0127 20:29:43.530129 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e04ce574-5470-43ae-8207-fb01bd98805f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:43 crc kubenswrapper[4858]: I0127 20:29:43.794292 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kr2j4" event={"ID":"e04ce574-5470-43ae-8207-fb01bd98805f","Type":"ContainerDied","Data":"71b9aac5bebb8d78b029d6fbddb3b4f1e06ecfb2be44c838ad0b09776729313c"} Jan 27 20:29:43 crc kubenswrapper[4858]: I0127 20:29:43.794341 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71b9aac5bebb8d78b029d6fbddb3b4f1e06ecfb2be44c838ad0b09776729313c" Jan 27 20:29:43 crc kubenswrapper[4858]: I0127 20:29:43.794387 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kr2j4" Jan 27 20:29:43 crc kubenswrapper[4858]: E0127 20:29:43.796637 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.46:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-nsgb9" podUID="8222b78c-e8de-4992-8c5b-bcf030d629ff" Jan 27 20:29:43 crc kubenswrapper[4858]: E0127 20:29:43.844002 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 27 20:29:43 crc kubenswrapper[4858]: E0127 20:29:43.844082 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 27 20:29:43 crc kubenswrapper[4858]: E0127 20:29:43.844356 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.129.56.46:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vr4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-6n2n5_openstack(047f39f4-e397-46e4-a998-4bf8060a1114): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:29:43 crc kubenswrapper[4858]: E0127 20:29:43.845591 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-6n2n5" podUID="047f39f4-e397-46e4-a998-4bf8060a1114" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.082920 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38c884ca-127a-4e48-a05a-bd1834beb22b" 
path="/var/lib/kubelet/pods/38c884ca-127a-4e48-a05a-bd1834beb22b/volumes" Jan 27 20:29:44 crc kubenswrapper[4858]: E0127 20:29:44.220360 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 27 20:29:44 crc kubenswrapper[4858]: E0127 20:29:44.220422 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.46:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 27 20:29:44 crc kubenswrapper[4858]: E0127 20:29:44.220651 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.129.56.46:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n89hbbhb9h5dbh6h575hb4h579h99h598h674h64bh59hd6h549hf6h5cch64ch59bhf5h647h6bh584h646h5b8h5d5h8dhb7h68bh56dh569h5cbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6fzcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7b795cea-c66d-4bca-8e9c-7da6cf08adf8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.269374 4858 scope.go:117] "RemoveContainer" containerID="7121007f9877d6b3e443d32c7923e4f941e0dbb3d6370fd08813064214743d20" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.438775 4858 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.501947 4858 scope.go:117] "RemoveContainer" containerID="828cd25448002f0411534faf9fb23020ccdc3c631c333c25285470b71670395b" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.571736 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phpg2\" (UniqueName: \"kubernetes.io/projected/59370c8c-d422-4992-a336-62b6d1e5f4d8-kube-api-access-phpg2\") pod \"59370c8c-d422-4992-a336-62b6d1e5f4d8\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.571792 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59370c8c-d422-4992-a336-62b6d1e5f4d8-logs\") pod \"59370c8c-d422-4992-a336-62b6d1e5f4d8\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.571811 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/59370c8c-d422-4992-a336-62b6d1e5f4d8-horizon-secret-key\") pod \"59370c8c-d422-4992-a336-62b6d1e5f4d8\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.571896 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59370c8c-d422-4992-a336-62b6d1e5f4d8-scripts\") pod \"59370c8c-d422-4992-a336-62b6d1e5f4d8\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.571982 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59370c8c-d422-4992-a336-62b6d1e5f4d8-config-data\") pod \"59370c8c-d422-4992-a336-62b6d1e5f4d8\" (UID: \"59370c8c-d422-4992-a336-62b6d1e5f4d8\") " Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.573476 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59370c8c-d422-4992-a336-62b6d1e5f4d8-config-data" (OuterVolumeSpecName: "config-data") pod "59370c8c-d422-4992-a336-62b6d1e5f4d8" (UID: "59370c8c-d422-4992-a336-62b6d1e5f4d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.574640 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59370c8c-d422-4992-a336-62b6d1e5f4d8-logs" (OuterVolumeSpecName: "logs") pod "59370c8c-d422-4992-a336-62b6d1e5f4d8" (UID: "59370c8c-d422-4992-a336-62b6d1e5f4d8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.575083 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59370c8c-d422-4992-a336-62b6d1e5f4d8-scripts" (OuterVolumeSpecName: "scripts") pod "59370c8c-d422-4992-a336-62b6d1e5f4d8" (UID: "59370c8c-d422-4992-a336-62b6d1e5f4d8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.591768 4858 scope.go:117] "RemoveContainer" containerID="e486f805a42fa728f387e0a3665b5b02c4d75eb1b3d531ddc088f9d1db72f8f9" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.591890 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59370c8c-d422-4992-a336-62b6d1e5f4d8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "59370c8c-d422-4992-a336-62b6d1e5f4d8" (UID: "59370c8c-d422-4992-a336-62b6d1e5f4d8"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.592044 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59370c8c-d422-4992-a336-62b6d1e5f4d8-kube-api-access-phpg2" (OuterVolumeSpecName: "kube-api-access-phpg2") pod "59370c8c-d422-4992-a336-62b6d1e5f4d8" (UID: "59370c8c-d422-4992-a336-62b6d1e5f4d8"). InnerVolumeSpecName "kube-api-access-phpg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.673869 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/59370c8c-d422-4992-a336-62b6d1e5f4d8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.674357 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59370c8c-d422-4992-a336-62b6d1e5f4d8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.674368 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59370c8c-d422-4992-a336-62b6d1e5f4d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.674379 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phpg2\" (UniqueName: \"kubernetes.io/projected/59370c8c-d422-4992-a336-62b6d1e5f4d8-kube-api-access-phpg2\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.674395 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59370c8c-d422-4992-a336-62b6d1e5f4d8-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.752351 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9d9cb9f97-shwkc"] Jan 27 20:29:44 crc kubenswrapper[4858]: E0127 20:29:44.752855 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c884ca-127a-4e48-a05a-bd1834beb22b" containerName="init" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.752867 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c884ca-127a-4e48-a05a-bd1834beb22b" containerName="init" Jan 27 20:29:44 crc kubenswrapper[4858]: E0127 20:29:44.752897 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e04ce574-5470-43ae-8207-fb01bd98805f" containerName="glance-db-sync" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.752903 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e04ce574-5470-43ae-8207-fb01bd98805f" containerName="glance-db-sync"
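
The RemoveStaleState and Deleted CPUSet pairs are admission housekeeping triggered by the SyncLoop ADD above: before starting dnsmasq-dns-9d9cb9f97-shwkc, the CPU and memory managers drop checkpoint entries belonging to containers whose pods are gone (the replaced dnsmasq pod and the finished glance-db-sync). Despite the E-level prefix these lines are routine cleanup, not failures. The CPU assignments live in a JSON checkpoint under the kubelet root; the sketch below dumps it (default path assumed, decoded shape simplified):

    // Dump kubelet's CPU-manager checkpoint, where the stale entries
    // pruned above were stored. Assumes the default kubelet root dir;
    // run as root on the node.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        // Typical keys: policyName, defaultCpuSet, entries.
        var state map[string]any
        if err := json.Unmarshal(raw, &state); err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Printf("%+v\n", state)
    }
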
Jan 27 20:29:44 crc kubenswrapper[4858]: E0127 20:29:44.752910 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c884ca-127a-4e48-a05a-bd1834beb22b" containerName="dnsmasq-dns" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.752916 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c884ca-127a-4e48-a05a-bd1834beb22b" containerName="dnsmasq-dns" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.753080 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="38c884ca-127a-4e48-a05a-bd1834beb22b" containerName="dnsmasq-dns" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.753106 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e04ce574-5470-43ae-8207-fb01bd98805f" containerName="glance-db-sync" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.754440 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.775737 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9d9cb9f97-shwkc"] Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.830942 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65769cc6c7-8z5vr" event={"ID":"59370c8c-d422-4992-a336-62b6d1e5f4d8","Type":"ContainerDied","Data":"a360b0609ee20ea6456711bf4fb56d48bb425e8ece038dcb37fa700196ac0ed1"} Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.831042 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65769cc6c7-8z5vr" Jan 27 20:29:44 crc kubenswrapper[4858]: E0127 20:29:44.845900 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.46:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-6n2n5" podUID="047f39f4-e397-46e4-a998-4bf8060a1114" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.879045 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-dns-swift-storage-0\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.879096 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-dns-svc\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.879145 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-ovsdbserver-sb\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.879204 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-config\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:44 crc kubenswrapper[4858]: 
I0127 20:29:44.879232 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-ovsdbserver-nb\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.879296 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5rz2\" (UniqueName: \"kubernetes.io/projected/38968ed0-a4a3-49df-b3b1-9816c0b77497-kube-api-access-x5rz2\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.914087 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65769cc6c7-8z5vr"] Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.923199 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-65769cc6c7-8z5vr"] Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.986424 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-ovsdbserver-nb\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.986819 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5rz2\" (UniqueName: \"kubernetes.io/projected/38968ed0-a4a3-49df-b3b1-9816c0b77497-kube-api-access-x5rz2\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.986902 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-dns-swift-storage-0\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.986928 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-dns-svc\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.987029 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-ovsdbserver-sb\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.987151 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-config\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.987301 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-ovsdbserver-nb\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.987884 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-dns-svc\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.988644 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-ovsdbserver-sb\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.988971 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-dns-swift-storage-0\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:44 crc kubenswrapper[4858]: I0127 20:29:44.991430 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-config\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.013841 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5rz2\" (UniqueName: \"kubernetes.io/projected/38968ed0-a4a3-49df-b3b1-9816c0b77497-kube-api-access-x5rz2\") pod \"dnsmasq-dns-9d9cb9f97-shwkc\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.081459 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.109698 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ltx6z"] Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.124033 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57556bc8bb-j4fhs"] Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.183460 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.217743 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f7fd77bcb-cxmbt"] Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.241120 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.771364 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.774623 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.783892 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xtrkf" Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.784114 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.784150 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 20:29:45 crc kubenswrapper[4858]: W0127 20:29:45.797191 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38968ed0_a4a3_49df_b3b1_9816c0b77497.slice/crio-11759ddff94a513304a9f1780452895b94ec1f40b668353520a0536f7e82dbc5 WatchSource:0}: Error finding container 11759ddff94a513304a9f1780452895b94ec1f40b668353520a0536f7e82dbc5: Status 404 returned error can't find the container with id 11759ddff94a513304a9f1780452895b94ec1f40b668353520a0536f7e82dbc5 Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.824258 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9d9cb9f97-shwkc"] Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.866881 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.869162 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3278beeb-52a3-4351-92f1-839e98e59395","Type":"ContainerStarted","Data":"1b263c9acdcec53a2477ac6ed780357fdbabc43bb0d855f08aef1df2722ee54e"} Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.875730 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57556bc8bb-j4fhs" event={"ID":"996129af-9ae9-44ca-b677-2c27bf71847d","Type":"ContainerStarted","Data":"11d47f7c2f0bf0b3a2b0d0e6e72957600b187b0b35e061c8999cf0c3423622b4"} Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.908287 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"b757c9de-8297-419d-9048-72cdf387c52d","Type":"ContainerStarted","Data":"5154fe764232f3dce69be43769c997b5c6b5ea8c01c78c02a7b17a4d896ced4d"} Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.939245 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ltx6z" event={"ID":"8dae3012-914a-4fdc-81b0-23dc98627b05","Type":"ContainerStarted","Data":"4b5673672fab895a02130bf3f064b895bcb5f15eb96c97f67a0aebc297f06be7"} Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.943530 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f66fea8-f112-476f-809c-dfb782625728-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.943704 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 
20:29:45.943750 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-config-data\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.943770 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-scripts\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.943835 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjkx9\" (UniqueName: \"kubernetes.io/projected/2f66fea8-f112-476f-809c-dfb782625728-kube-api-access-xjkx9\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.943872 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.943914 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f66fea8-f112-476f-809c-dfb782625728-logs\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.964910 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.977038 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.977165 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a40095d8-0b5f-4fc9-a4e6-776a899d41e0","Type":"ContainerStarted","Data":"27afcf0b2a2eb19d6d712310686a42187b444e5a7cad29d229e3bb0b2eff74ab"} Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.977214 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.981305 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.984097 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=5.977197917 podStartE2EDuration="32.9840727s" podCreationTimestamp="2026-01-27 20:29:13 +0000 UTC" firstStartedPulling="2026-01-27 20:29:15.081181017 +0000 UTC m=+1299.788996723" lastFinishedPulling="2026-01-27 20:29:42.08805578 +0000 UTC m=+1326.795871506" observedRunningTime="2026-01-27 20:29:45.888257491 +0000 UTC m=+1330.596073197" watchObservedRunningTime="2026-01-27 20:29:45.9840727 +0000 UTC m=+1330.691888406" Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.985339 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f7fd77bcb-cxmbt" event={"ID":"2ec05cb1-c40c-48cb-ba64-9321abb6287c","Type":"ContainerStarted","Data":"f70a38889bf4976d6a2973d48baab3f476e3c806a93b7aa505848883883d40cc"} Jan 27 20:29:45 crc kubenswrapper[4858]: I0127 20:29:45.990677 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" event={"ID":"38968ed0-a4a3-49df-b3b1-9816c0b77497","Type":"ContainerStarted","Data":"11759ddff94a513304a9f1780452895b94ec1f40b668353520a0536f7e82dbc5"} Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.005429 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=5.408560973 podStartE2EDuration="34.005404888s" podCreationTimestamp="2026-01-27 20:29:12 +0000 UTC" firstStartedPulling="2026-01-27 20:29:14.613111125 +0000 UTC m=+1299.320926831" lastFinishedPulling="2026-01-27 20:29:43.209955 +0000 UTC m=+1327.917770746" observedRunningTime="2026-01-27 20:29:45.927737636 +0000 UTC m=+1330.635553362" watchObservedRunningTime="2026-01-27 20:29:46.005404888 +0000 UTC m=+1330.713220594" Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.010829 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6544888b69-dvcr4" event={"ID":"74707222-b7c2-4226-8df2-2459cb7d447c","Type":"ContainerStarted","Data":"7fd1c7a01ca4fad5ce789f8d407a634918a206fe96e96938159bc3e46f13b444"} Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.011244 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6544888b69-dvcr4" podUID="74707222-b7c2-4226-8df2-2459cb7d447c" containerName="horizon-log" containerID="cri-o://7fd1c7a01ca4fad5ce789f8d407a634918a206fe96e96938159bc3e46f13b444" gracePeriod=30 Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.011901 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6544888b69-dvcr4" podUID="74707222-b7c2-4226-8df2-2459cb7d447c" containerName="horizon" containerID="cri-o://a8d466b20daa4313c2971fa338639a04797f99e64c8b339dae34521368fce161" gracePeriod=30 Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.024137 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-678cc97f57-w9dmc" event={"ID":"0f3aa248-8818-4e60-9946-16d08aecd5ab","Type":"ContainerStarted","Data":"83cd37394ce9dab3cd56fcdd3d1478d30a2871b2408745e4596d1247af1202eb"} Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.032965 4858 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/horizon-6544888b69-dvcr4" podStartSLOduration=4.377657176 podStartE2EDuration="33.032943947s" podCreationTimestamp="2026-01-27 20:29:13 +0000 UTC" firstStartedPulling="2026-01-27 20:29:15.647641332 +0000 UTC m=+1300.355457038" lastFinishedPulling="2026-01-27 20:29:44.302928103 +0000 UTC m=+1329.010743809" observedRunningTime="2026-01-27 20:29:46.032543265 +0000 UTC m=+1330.740358981" watchObservedRunningTime="2026-01-27 20:29:46.032943947 +0000 UTC m=+1330.740759653" Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.045140 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f66fea8-f112-476f-809c-dfb782625728-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.045246 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.045272 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-config-data\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.045293 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-scripts\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.045359 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjkx9\" (UniqueName: \"kubernetes.io/projected/2f66fea8-f112-476f-809c-dfb782625728-kube-api-access-xjkx9\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.045384 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.045416 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f66fea8-f112-476f-809c-dfb782625728-logs\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.045867 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f66fea8-f112-476f-809c-dfb782625728-logs\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0" Jan 27 
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.049484 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.053195 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f66fea8-f112-476f-809c-dfb782625728-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.059518 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-scripts\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.063306 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-config-data\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.063528 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.069172 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjkx9\" (UniqueName: \"kubernetes.io/projected/2f66fea8-f112-476f-809c-dfb782625728-kube-api-access-xjkx9\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.089695 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59370c8c-d422-4992-a336-62b6d1e5f4d8" path="/var/lib/kubelet/pods/59370c8c-d422-4992-a336-62b6d1e5f4d8/volumes"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.105043 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " pod="openstack/glance-default-external-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.118822 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.148902 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00d53ee1-62ed-4848-88bb-40a58313340f-logs\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.148973 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lnbb\" (UniqueName: \"kubernetes.io/projected/00d53ee1-62ed-4848-88bb-40a58313340f-kube-api-access-2lnbb\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.149036 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.149059 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.149146 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.149814 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.149887 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00d53ee1-62ed-4848-88bb-40a58313340f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.251733 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.252122 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00d53ee1-62ed-4848-88bb-40a58313340f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.252185 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00d53ee1-62ed-4848-88bb-40a58313340f-logs\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.252212 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lnbb\" (UniqueName: \"kubernetes.io/projected/00d53ee1-62ed-4848-88bb-40a58313340f-kube-api-access-2lnbb\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.252244 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.252259 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.252356 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.256641 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.256904 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00d53ee1-62ed-4848-88bb-40a58313340f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.257114 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00d53ee1-62ed-4848-88bb-40a58313340f-logs\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.260768 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.265203 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.273325 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.287439 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lnbb\" (UniqueName: \"kubernetes.io/projected/00d53ee1-62ed-4848-88bb-40a58313340f-kube-api-access-2lnbb\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.342818 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:46 crc kubenswrapper[4858]: I0127 20:29:46.606715 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 20:29:47 crc kubenswrapper[4858]: I0127 20:29:47.047237 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-678cc97f57-w9dmc" event={"ID":"0f3aa248-8818-4e60-9946-16d08aecd5ab","Type":"ContainerStarted","Data":"5c7424910c6edc4b7b3f20ed691ec2f741c8afc03400ef2e786ac2a9126ea152"}
Jan 27 20:29:47 crc kubenswrapper[4858]: I0127 20:29:47.047924 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-678cc97f57-w9dmc" podUID="0f3aa248-8818-4e60-9946-16d08aecd5ab" containerName="horizon-log" containerID="cri-o://83cd37394ce9dab3cd56fcdd3d1478d30a2871b2408745e4596d1247af1202eb" gracePeriod=30
Jan 27 20:29:47 crc kubenswrapper[4858]: I0127 20:29:47.048449 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-678cc97f57-w9dmc" podUID="0f3aa248-8818-4e60-9946-16d08aecd5ab" containerName="horizon" containerID="cri-o://5c7424910c6edc4b7b3f20ed691ec2f741c8afc03400ef2e786ac2a9126ea152" gracePeriod=30
Jan 27 20:29:47 crc kubenswrapper[4858]: I0127 20:29:47.059535 4858 generic.go:334] "Generic (PLEG): container finished" podID="38968ed0-a4a3-49df-b3b1-9816c0b77497" containerID="9fc44f526653b3ece0070dd646f1b4f781b5a067fc44159f3bc166c573c1a1bb" exitCode=0
Jan 27 20:29:47 crc kubenswrapper[4858]: I0127 20:29:47.059636 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" event={"ID":"38968ed0-a4a3-49df-b3b1-9816c0b77497","Type":"ContainerDied","Data":"9fc44f526653b3ece0070dd646f1b4f781b5a067fc44159f3bc166c573c1a1bb"}
i/o timeout" Jan 27 20:29:47 crc kubenswrapper[4858]: I0127 20:29:47.067359 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57556bc8bb-j4fhs" event={"ID":"996129af-9ae9-44ca-b677-2c27bf71847d","Type":"ContainerStarted","Data":"c4e19711032c114fb0f1376966a9f8405d9a77c047e1c93151a9200febf5affc"} Jan 27 20:29:47 crc kubenswrapper[4858]: I0127 20:29:47.069158 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6544888b69-dvcr4" event={"ID":"74707222-b7c2-4226-8df2-2459cb7d447c","Type":"ContainerStarted","Data":"a8d466b20daa4313c2971fa338639a04797f99e64c8b339dae34521368fce161"} Jan 27 20:29:47 crc kubenswrapper[4858]: I0127 20:29:47.080790 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ltx6z" event={"ID":"8dae3012-914a-4fdc-81b0-23dc98627b05","Type":"ContainerStarted","Data":"c242687079d8393bf6b627b57754e87a8068b99262fd51b668befa18f96d68b9"} Jan 27 20:29:47 crc kubenswrapper[4858]: I0127 20:29:47.109388 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a40095d8-0b5f-4fc9-a4e6-776a899d41e0","Type":"ContainerStarted","Data":"feaba0008f23f1e7aee49e3e0f41aa88c51ba0b941c77f6182e459112d8408b9"} Jan 27 20:29:47 crc kubenswrapper[4858]: I0127 20:29:47.113239 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-678cc97f57-w9dmc" podStartSLOduration=5.115198532 podStartE2EDuration="34.11321274s" podCreationTimestamp="2026-01-27 20:29:13 +0000 UTC" firstStartedPulling="2026-01-27 20:29:14.832210948 +0000 UTC m=+1299.540026654" lastFinishedPulling="2026-01-27 20:29:43.830225156 +0000 UTC m=+1328.538040862" observedRunningTime="2026-01-27 20:29:47.079932945 +0000 UTC m=+1331.787748651" watchObservedRunningTime="2026-01-27 20:29:47.11321274 +0000 UTC m=+1331.821028446" Jan 27 20:29:47 crc kubenswrapper[4858]: I0127 20:29:47.117991 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f7fd77bcb-cxmbt" event={"ID":"2ec05cb1-c40c-48cb-ba64-9321abb6287c","Type":"ContainerStarted","Data":"51a7810e6ed3102dd208860bde7beb41d43fac91b3815b7ecdc22e5d766e5ed9"} Jan 27 20:29:47 crc kubenswrapper[4858]: I0127 20:29:47.415488 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ltx6z" podStartSLOduration=18.415457534 podStartE2EDuration="18.415457534s" podCreationTimestamp="2026-01-27 20:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:29:47.127980568 +0000 UTC m=+1331.835796284" watchObservedRunningTime="2026-01-27 20:29:47.415457534 +0000 UTC m=+1332.123273240" Jan 27 20:29:47 crc kubenswrapper[4858]: I0127 20:29:47.418713 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:29:47 crc kubenswrapper[4858]: I0127 20:29:47.531857 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:29:47 crc kubenswrapper[4858]: W0127 20:29:47.546805 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00d53ee1_62ed_4848_88bb_40a58313340f.slice/crio-dcbe0da67cbf844a193b0ab33231bc22cd9091d23152bcca1529aea3433e0e19 WatchSource:0}: Error finding container dcbe0da67cbf844a193b0ab33231bc22cd9091d23152bcca1529aea3433e0e19: Status 404 returned error can't find the container with id 
dcbe0da67cbf844a193b0ab33231bc22cd9091d23152bcca1529aea3433e0e19 Jan 27 20:29:48 crc kubenswrapper[4858]: I0127 20:29:48.161725 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" event={"ID":"38968ed0-a4a3-49df-b3b1-9816c0b77497","Type":"ContainerStarted","Data":"4a2d3c0b69d2803c548a955731080f645cb9ddf696bba50c21cd6fa56a3d4f68"} Jan 27 20:29:48 crc kubenswrapper[4858]: I0127 20:29:48.163946 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:48 crc kubenswrapper[4858]: I0127 20:29:48.180419 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b795cea-c66d-4bca-8e9c-7da6cf08adf8","Type":"ContainerStarted","Data":"aa84b43dd39168f5465057da4ffc0cf125da3e976c1b56bc5fb7f19c3ad83c36"} Jan 27 20:29:48 crc kubenswrapper[4858]: I0127 20:29:48.201355 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" podStartSLOduration=4.201329881 podStartE2EDuration="4.201329881s" podCreationTimestamp="2026-01-27 20:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:29:48.19164658 +0000 UTC m=+1332.899462276" watchObservedRunningTime="2026-01-27 20:29:48.201329881 +0000 UTC m=+1332.909145597" Jan 27 20:29:48 crc kubenswrapper[4858]: I0127 20:29:48.203365 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57556bc8bb-j4fhs" event={"ID":"996129af-9ae9-44ca-b677-2c27bf71847d","Type":"ContainerStarted","Data":"8bd1231f9145a2482630ec07dd1d63456bae21a67717e670d96d0e8d6cfc46cc"} Jan 27 20:29:48 crc kubenswrapper[4858]: I0127 20:29:48.213430 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00d53ee1-62ed-4848-88bb-40a58313340f","Type":"ContainerStarted","Data":"dcbe0da67cbf844a193b0ab33231bc22cd9091d23152bcca1529aea3433e0e19"} Jan 27 20:29:48 crc kubenswrapper[4858]: I0127 20:29:48.217173 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a40095d8-0b5f-4fc9-a4e6-776a899d41e0","Type":"ContainerStarted","Data":"c6d284b1a3bea0cf002332c36984d2ec019deb16b0466ac5b771dc9aff758b76"} Jan 27 20:29:48 crc kubenswrapper[4858]: I0127 20:29:48.218435 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 27 20:29:48 crc kubenswrapper[4858]: I0127 20:29:48.222010 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2f66fea8-f112-476f-809c-dfb782625728","Type":"ContainerStarted","Data":"3d326aebb39ac32c0ef60a14ceb600a261f6a14a42b6424b1c63eb9a1b453f8e"} Jan 27 20:29:48 crc kubenswrapper[4858]: I0127 20:29:48.252623 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-57556bc8bb-j4fhs" podStartSLOduration=26.252595307 podStartE2EDuration="26.252595307s" podCreationTimestamp="2026-01-27 20:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:29:48.231738623 +0000 UTC m=+1332.939554329" watchObservedRunningTime="2026-01-27 20:29:48.252595307 +0000 UTC m=+1332.960411013" Jan 27 20:29:48 crc kubenswrapper[4858]: I0127 20:29:48.266470 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f7fd77bcb-cxmbt" 
event={"ID":"2ec05cb1-c40c-48cb-ba64-9321abb6287c","Type":"ContainerStarted","Data":"1fb5d262371a89abeed10a1670e4080ebaeb89f0f9b926b587ffc3cf13b2dccc"} Jan 27 20:29:48 crc kubenswrapper[4858]: I0127 20:29:48.285770 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=20.285748649 podStartE2EDuration="20.285748649s" podCreationTimestamp="2026-01-27 20:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:29:48.272598628 +0000 UTC m=+1332.980414364" watchObservedRunningTime="2026-01-27 20:29:48.285748649 +0000 UTC m=+1332.993564355" Jan 27 20:29:48 crc kubenswrapper[4858]: I0127 20:29:48.312314 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5f7fd77bcb-cxmbt" podStartSLOduration=26.312289178 podStartE2EDuration="26.312289178s" podCreationTimestamp="2026-01-27 20:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:29:48.302708321 +0000 UTC m=+1333.010524047" watchObservedRunningTime="2026-01-27 20:29:48.312289178 +0000 UTC m=+1333.020104884" Jan 27 20:29:48 crc kubenswrapper[4858]: I0127 20:29:48.370735 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 27 20:29:48 crc kubenswrapper[4858]: I0127 20:29:48.561192 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:29:48 crc kubenswrapper[4858]: I0127 20:29:48.676533 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:29:49 crc kubenswrapper[4858]: I0127 20:29:49.036702 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 27 20:29:49 crc kubenswrapper[4858]: I0127 20:29:49.038030 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 27 20:29:49 crc kubenswrapper[4858]: I0127 20:29:49.306643 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00d53ee1-62ed-4848-88bb-40a58313340f","Type":"ContainerStarted","Data":"37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a"} Jan 27 20:29:49 crc kubenswrapper[4858]: I0127 20:29:49.309035 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2f66fea8-f112-476f-809c-dfb782625728","Type":"ContainerStarted","Data":"06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3"} Jan 27 20:29:50 crc kubenswrapper[4858]: I0127 20:29:50.079899 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/watcher-api-0" podUID="a40095d8-0b5f-4fc9-a4e6-776a899d41e0" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 20:29:50 crc kubenswrapper[4858]: I0127 20:29:50.322946 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2f66fea8-f112-476f-809c-dfb782625728","Type":"ContainerStarted","Data":"d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e"} Jan 27 20:29:50 crc kubenswrapper[4858]: I0127 20:29:50.323162 4858 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/glance-default-external-api-0" podUID="2f66fea8-f112-476f-809c-dfb782625728" containerName="glance-log" containerID="cri-o://06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3" gracePeriod=30 Jan 27 20:29:50 crc kubenswrapper[4858]: I0127 20:29:50.323956 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2f66fea8-f112-476f-809c-dfb782625728" containerName="glance-httpd" containerID="cri-o://d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e" gracePeriod=30 Jan 27 20:29:50 crc kubenswrapper[4858]: I0127 20:29:50.330342 4858 generic.go:334] "Generic (PLEG): container finished" podID="734f1877-8907-44ff-b8af-c1a5f1b1395d" containerID="fac4154d4462e64acb42205823e1c5870f70bcdc77728c1f41d181a934b9b634" exitCode=0 Jan 27 20:29:50 crc kubenswrapper[4858]: I0127 20:29:50.330425 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jlwch" event={"ID":"734f1877-8907-44ff-b8af-c1a5f1b1395d","Type":"ContainerDied","Data":"fac4154d4462e64acb42205823e1c5870f70bcdc77728c1f41d181a934b9b634"} Jan 27 20:29:50 crc kubenswrapper[4858]: I0127 20:29:50.337173 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00d53ee1-62ed-4848-88bb-40a58313340f","Type":"ContainerStarted","Data":"1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220"} Jan 27 20:29:50 crc kubenswrapper[4858]: I0127 20:29:50.337222 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 20:29:50 crc kubenswrapper[4858]: I0127 20:29:50.337345 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="00d53ee1-62ed-4848-88bb-40a58313340f" containerName="glance-log" containerID="cri-o://37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a" gracePeriod=30 Jan 27 20:29:50 crc kubenswrapper[4858]: I0127 20:29:50.337402 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="00d53ee1-62ed-4848-88bb-40a58313340f" containerName="glance-httpd" containerID="cri-o://1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220" gracePeriod=30 Jan 27 20:29:50 crc kubenswrapper[4858]: I0127 20:29:50.353107 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.353078643 podStartE2EDuration="6.353078643s" podCreationTimestamp="2026-01-27 20:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:29:50.350141667 +0000 UTC m=+1335.057957383" watchObservedRunningTime="2026-01-27 20:29:50.353078643 +0000 UTC m=+1335.060894349" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.177462 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.272283 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.301520 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00d53ee1-62ed-4848-88bb-40a58313340f-logs\") pod \"00d53ee1-62ed-4848-88bb-40a58313340f\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.301673 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00d53ee1-62ed-4848-88bb-40a58313340f-httpd-run\") pod \"00d53ee1-62ed-4848-88bb-40a58313340f\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.301715 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-scripts\") pod \"00d53ee1-62ed-4848-88bb-40a58313340f\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.301869 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-config-data\") pod \"00d53ee1-62ed-4848-88bb-40a58313340f\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.301935 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lnbb\" (UniqueName: \"kubernetes.io/projected/00d53ee1-62ed-4848-88bb-40a58313340f-kube-api-access-2lnbb\") pod \"00d53ee1-62ed-4848-88bb-40a58313340f\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.302029 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-combined-ca-bundle\") pod \"00d53ee1-62ed-4848-88bb-40a58313340f\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.302243 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d53ee1-62ed-4848-88bb-40a58313340f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "00d53ee1-62ed-4848-88bb-40a58313340f" (UID: "00d53ee1-62ed-4848-88bb-40a58313340f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.302297 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d53ee1-62ed-4848-88bb-40a58313340f-logs" (OuterVolumeSpecName: "logs") pod "00d53ee1-62ed-4848-88bb-40a58313340f" (UID: "00d53ee1-62ed-4848-88bb-40a58313340f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.302323 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"00d53ee1-62ed-4848-88bb-40a58313340f\" (UID: \"00d53ee1-62ed-4848-88bb-40a58313340f\") " Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.303277 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00d53ee1-62ed-4848-88bb-40a58313340f-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.303297 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00d53ee1-62ed-4848-88bb-40a58313340f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.314515 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "00d53ee1-62ed-4848-88bb-40a58313340f" (UID: "00d53ee1-62ed-4848-88bb-40a58313340f"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.315619 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-scripts" (OuterVolumeSpecName: "scripts") pod "00d53ee1-62ed-4848-88bb-40a58313340f" (UID: "00d53ee1-62ed-4848-88bb-40a58313340f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.319228 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d53ee1-62ed-4848-88bb-40a58313340f-kube-api-access-2lnbb" (OuterVolumeSpecName: "kube-api-access-2lnbb") pod "00d53ee1-62ed-4848-88bb-40a58313340f" (UID: "00d53ee1-62ed-4848-88bb-40a58313340f"). InnerVolumeSpecName "kube-api-access-2lnbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.342793 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00d53ee1-62ed-4848-88bb-40a58313340f" (UID: "00d53ee1-62ed-4848-88bb-40a58313340f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.358410 4858 generic.go:334] "Generic (PLEG): container finished" podID="00d53ee1-62ed-4848-88bb-40a58313340f" containerID="1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220" exitCode=0 Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.358445 4858 generic.go:334] "Generic (PLEG): container finished" podID="00d53ee1-62ed-4848-88bb-40a58313340f" containerID="37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a" exitCode=143 Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.358510 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00d53ee1-62ed-4848-88bb-40a58313340f","Type":"ContainerDied","Data":"1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220"} Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.358541 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00d53ee1-62ed-4848-88bb-40a58313340f","Type":"ContainerDied","Data":"37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a"} Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.358563 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"00d53ee1-62ed-4848-88bb-40a58313340f","Type":"ContainerDied","Data":"dcbe0da67cbf844a193b0ab33231bc22cd9091d23152bcca1529aea3433e0e19"} Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.358579 4858 scope.go:117] "RemoveContainer" containerID="1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.359002 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.376254 4858 generic.go:334] "Generic (PLEG): container finished" podID="2f66fea8-f112-476f-809c-dfb782625728" containerID="d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e" exitCode=0 Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.376295 4858 generic.go:334] "Generic (PLEG): container finished" podID="2f66fea8-f112-476f-809c-dfb782625728" containerID="06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3" exitCode=143 Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.376531 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.377516 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2f66fea8-f112-476f-809c-dfb782625728","Type":"ContainerDied","Data":"d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e"} Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.377569 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2f66fea8-f112-476f-809c-dfb782625728","Type":"ContainerDied","Data":"06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3"} Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.377583 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2f66fea8-f112-476f-809c-dfb782625728","Type":"ContainerDied","Data":"3d326aebb39ac32c0ef60a14ceb600a261f6a14a42b6424b1c63eb9a1b453f8e"} Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.394836 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-config-data" (OuterVolumeSpecName: "config-data") pod "00d53ee1-62ed-4848-88bb-40a58313340f" (UID: "00d53ee1-62ed-4848-88bb-40a58313340f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.404476 4858 scope.go:117] "RemoveContainer" containerID="37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.404778 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-scripts\") pod \"2f66fea8-f112-476f-809c-dfb782625728\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.404854 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-combined-ca-bundle\") pod \"2f66fea8-f112-476f-809c-dfb782625728\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.405009 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f66fea8-f112-476f-809c-dfb782625728-httpd-run\") pod \"2f66fea8-f112-476f-809c-dfb782625728\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.405069 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-config-data\") pod \"2f66fea8-f112-476f-809c-dfb782625728\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.405091 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"2f66fea8-f112-476f-809c-dfb782625728\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.405119 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjkx9\" (UniqueName: 
\"kubernetes.io/projected/2f66fea8-f112-476f-809c-dfb782625728-kube-api-access-xjkx9\") pod \"2f66fea8-f112-476f-809c-dfb782625728\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.405140 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f66fea8-f112-476f-809c-dfb782625728-logs\") pod \"2f66fea8-f112-476f-809c-dfb782625728\" (UID: \"2f66fea8-f112-476f-809c-dfb782625728\") " Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.405681 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.405706 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.405719 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lnbb\" (UniqueName: \"kubernetes.io/projected/00d53ee1-62ed-4848-88bb-40a58313340f-kube-api-access-2lnbb\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.405732 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00d53ee1-62ed-4848-88bb-40a58313340f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.405762 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.405890 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f66fea8-f112-476f-809c-dfb782625728-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2f66fea8-f112-476f-809c-dfb782625728" (UID: "2f66fea8-f112-476f-809c-dfb782625728"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.410835 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f66fea8-f112-476f-809c-dfb782625728-logs" (OuterVolumeSpecName: "logs") pod "2f66fea8-f112-476f-809c-dfb782625728" (UID: "2f66fea8-f112-476f-809c-dfb782625728"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.414095 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "2f66fea8-f112-476f-809c-dfb782625728" (UID: "2f66fea8-f112-476f-809c-dfb782625728"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.419000 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-scripts" (OuterVolumeSpecName: "scripts") pod "2f66fea8-f112-476f-809c-dfb782625728" (UID: "2f66fea8-f112-476f-809c-dfb782625728"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.421377 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f66fea8-f112-476f-809c-dfb782625728-kube-api-access-xjkx9" (OuterVolumeSpecName: "kube-api-access-xjkx9") pod "2f66fea8-f112-476f-809c-dfb782625728" (UID: "2f66fea8-f112-476f-809c-dfb782625728"). InnerVolumeSpecName "kube-api-access-xjkx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.438130 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.448505 4858 scope.go:117] "RemoveContainer" containerID="1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220" Jan 27 20:29:51 crc kubenswrapper[4858]: E0127 20:29:51.449587 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220\": container with ID starting with 1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220 not found: ID does not exist" containerID="1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.450929 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220"} err="failed to get container status \"1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220\": rpc error: code = NotFound desc = could not find container \"1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220\": container with ID starting with 1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220 not found: ID does not exist" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.451652 4858 scope.go:117] "RemoveContainer" containerID="37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a" Jan 27 20:29:51 crc kubenswrapper[4858]: E0127 20:29:51.455792 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a\": container with ID starting with 37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a not found: ID does not exist" containerID="37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.455946 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a"} err="failed to get container status \"37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a\": rpc error: code = NotFound desc = could not find container \"37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a\": container with ID starting with 37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a not found: ID does not exist" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.455986 4858 scope.go:117] "RemoveContainer" containerID="1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.456450 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220"} err="failed to get container status \"1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220\": rpc error: code = NotFound desc = could not find container \"1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220\": container with ID starting with 1a9d409491c161bdf79546c181b6ab12fd66a6a4b31528a932b9660cca1d5220 not found: ID does not exist" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.456471 4858 scope.go:117] "RemoveContainer" containerID="37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.462528 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a"} err="failed to get container status \"37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a\": rpc error: code = NotFound desc = could not find container \"37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a\": container with ID starting with 37e770ca981b8d9376977eb394a5f7e119b17337cba264d13af0a20f81509d4a not found: ID does not exist" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.462604 4858 scope.go:117] "RemoveContainer" containerID="d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.518930 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f66fea8-f112-476f-809c-dfb782625728-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.519008 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.519026 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjkx9\" (UniqueName: \"kubernetes.io/projected/2f66fea8-f112-476f-809c-dfb782625728-kube-api-access-xjkx9\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.519040 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f66fea8-f112-476f-809c-dfb782625728-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.519050 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.519067 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.534187 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f66fea8-f112-476f-809c-dfb782625728" (UID: "2f66fea8-f112-476f-809c-dfb782625728"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.548126 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.603509 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-config-data" (OuterVolumeSpecName: "config-data") pod "2f66fea8-f112-476f-809c-dfb782625728" (UID: "2f66fea8-f112-476f-809c-dfb782625728"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.624238 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.624580 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.624710 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f66fea8-f112-476f-809c-dfb782625728-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.746686 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.756591 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.767112 4858 scope.go:117] "RemoveContainer" containerID="06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.770188 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.778645 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.791234 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:29:51 crc kubenswrapper[4858]: E0127 20:29:51.791709 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d53ee1-62ed-4848-88bb-40a58313340f" containerName="glance-httpd" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.791728 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d53ee1-62ed-4848-88bb-40a58313340f" containerName="glance-httpd" Jan 27 20:29:51 crc kubenswrapper[4858]: E0127 20:29:51.791740 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f66fea8-f112-476f-809c-dfb782625728" containerName="glance-log" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.791746 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f66fea8-f112-476f-809c-dfb782625728" containerName="glance-log" Jan 27 20:29:51 crc kubenswrapper[4858]: E0127 20:29:51.791759 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f66fea8-f112-476f-809c-dfb782625728" containerName="glance-httpd" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.791766 4858 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="2f66fea8-f112-476f-809c-dfb782625728" containerName="glance-httpd" Jan 27 20:29:51 crc kubenswrapper[4858]: E0127 20:29:51.791801 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d53ee1-62ed-4848-88bb-40a58313340f" containerName="glance-log" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.791806 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d53ee1-62ed-4848-88bb-40a58313340f" containerName="glance-log" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.791969 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d53ee1-62ed-4848-88bb-40a58313340f" containerName="glance-log" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.791993 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f66fea8-f112-476f-809c-dfb782625728" containerName="glance-log" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.792004 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d53ee1-62ed-4848-88bb-40a58313340f" containerName="glance-httpd" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.792020 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f66fea8-f112-476f-809c-dfb782625728" containerName="glance-httpd" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.799314 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.803229 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.805154 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.805302 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xtrkf" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.805456 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.818218 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.820124 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.824128 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.824355 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.841802 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.851505 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.924585 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jlwch" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.946101 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.946161 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.946193 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.946210 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-config-data\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.946228 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01d3077e-3576-4247-a840-2cb60819c113-logs\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.946257 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l654n\" (UniqueName: \"kubernetes.io/projected/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-kube-api-access-l654n\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.946287 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.946321 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-config-data\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.946342 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.946374 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jwxm\" (UniqueName: \"kubernetes.io/projected/01d3077e-3576-4247-a840-2cb60819c113-kube-api-access-6jwxm\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.946392 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-scripts\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.946413 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.946448 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/01d3077e-3576-4247-a840-2cb60819c113-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.946574 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-scripts\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.946608 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.946635 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-logs\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.948489 4858 scope.go:117] "RemoveContainer" containerID="d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e" Jan 27 20:29:51 crc kubenswrapper[4858]: E0127 20:29:51.958108 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e\": container with ID starting with d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e 
not found: ID does not exist" containerID="d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.958194 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e"} err="failed to get container status \"d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e\": rpc error: code = NotFound desc = could not find container \"d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e\": container with ID starting with d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e not found: ID does not exist" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.959345 4858 scope.go:117] "RemoveContainer" containerID="06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3" Jan 27 20:29:51 crc kubenswrapper[4858]: E0127 20:29:51.965110 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3\": container with ID starting with 06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3 not found: ID does not exist" containerID="06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.965183 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3"} err="failed to get container status \"06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3\": rpc error: code = NotFound desc = could not find container \"06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3\": container with ID starting with 06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3 not found: ID does not exist" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.965221 4858 scope.go:117] "RemoveContainer" containerID="d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.969043 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e"} err="failed to get container status \"d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e\": rpc error: code = NotFound desc = could not find container \"d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e\": container with ID starting with d15b1fba0b04c8bc310cee7c321d1d52eb0994368e027f521ac27ade56517a1e not found: ID does not exist" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.969094 4858 scope.go:117] "RemoveContainer" containerID="06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3" Jan 27 20:29:51 crc kubenswrapper[4858]: I0127 20:29:51.969574 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3"} err="failed to get container status \"06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3\": rpc error: code = NotFound desc = could not find container \"06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3\": container with ID starting with 06938f68343e2e697ef8bc3be39ae3022e7613e30290590444d2da37a7b6dfe3 not found: ID does not exist" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.051766 4858 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46phc\" (UniqueName: \"kubernetes.io/projected/734f1877-8907-44ff-b8af-c1a5f1b1395d-kube-api-access-46phc\") pod \"734f1877-8907-44ff-b8af-c1a5f1b1395d\" (UID: \"734f1877-8907-44ff-b8af-c1a5f1b1395d\") " Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.051984 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/734f1877-8907-44ff-b8af-c1a5f1b1395d-config\") pod \"734f1877-8907-44ff-b8af-c1a5f1b1395d\" (UID: \"734f1877-8907-44ff-b8af-c1a5f1b1395d\") " Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.052075 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/734f1877-8907-44ff-b8af-c1a5f1b1395d-combined-ca-bundle\") pod \"734f1877-8907-44ff-b8af-c1a5f1b1395d\" (UID: \"734f1877-8907-44ff-b8af-c1a5f1b1395d\") " Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.053007 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.053147 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-config-data\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.053194 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.053275 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jwxm\" (UniqueName: \"kubernetes.io/projected/01d3077e-3576-4247-a840-2cb60819c113-kube-api-access-6jwxm\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.053342 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-scripts\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.053397 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.053492 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/01d3077e-3576-4247-a840-2cb60819c113-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.053610 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-scripts\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.053725 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.053829 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-logs\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.053915 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.053974 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.054053 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.054084 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-config-data\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.054121 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01d3077e-3576-4247-a840-2cb60819c113-logs\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.054171 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l654n\" (UniqueName: \"kubernetes.io/projected/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-kube-api-access-l654n\") pod \"glance-default-external-api-0\" (UID: 
\"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.062791 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/734f1877-8907-44ff-b8af-c1a5f1b1395d-kube-api-access-46phc" (OuterVolumeSpecName: "kube-api-access-46phc") pod "734f1877-8907-44ff-b8af-c1a5f1b1395d" (UID: "734f1877-8907-44ff-b8af-c1a5f1b1395d"). InnerVolumeSpecName "kube-api-access-46phc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.062901 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01d3077e-3576-4247-a840-2cb60819c113-logs\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.063069 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.063085 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.063740 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.064885 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/01d3077e-3576-4247-a840-2cb60819c113-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.067826 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-logs\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.068737 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.069235 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.072507 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.072706 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-scripts\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.090518 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00d53ee1-62ed-4848-88bb-40a58313340f" path="/var/lib/kubelet/pods/00d53ee1-62ed-4848-88bb-40a58313340f/volumes" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.091455 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f66fea8-f112-476f-809c-dfb782625728" path="/var/lib/kubelet/pods/2f66fea8-f112-476f-809c-dfb782625728/volumes" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.092385 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jwxm\" (UniqueName: \"kubernetes.io/projected/01d3077e-3576-4247-a840-2cb60819c113-kube-api-access-6jwxm\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.099391 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l654n\" (UniqueName: \"kubernetes.io/projected/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-kube-api-access-l654n\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.099876 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.100644 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-config-data\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.101602 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-scripts\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.102295 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-config-data\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.117528 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.145047 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/734f1877-8907-44ff-b8af-c1a5f1b1395d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "734f1877-8907-44ff-b8af-c1a5f1b1395d" (UID: "734f1877-8907-44ff-b8af-c1a5f1b1395d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.156615 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46phc\" (UniqueName: \"kubernetes.io/projected/734f1877-8907-44ff-b8af-c1a5f1b1395d-kube-api-access-46phc\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.156665 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/734f1877-8907-44ff-b8af-c1a5f1b1395d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.161282 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.167057 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/734f1877-8907-44ff-b8af-c1a5f1b1395d-config" (OuterVolumeSpecName: "config") pod "734f1877-8907-44ff-b8af-c1a5f1b1395d" (UID: "734f1877-8907-44ff-b8af-c1a5f1b1395d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.181988 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.233233 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.258393 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/734f1877-8907-44ff-b8af-c1a5f1b1395d-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.259079 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.462638 4858 generic.go:334] "Generic (PLEG): container finished" podID="3278beeb-52a3-4351-92f1-839e98e59395" containerID="1b263c9acdcec53a2477ac6ed780357fdbabc43bb0d855f08aef1df2722ee54e" exitCode=1 Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.463223 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3278beeb-52a3-4351-92f1-839e98e59395","Type":"ContainerDied","Data":"1b263c9acdcec53a2477ac6ed780357fdbabc43bb0d855f08aef1df2722ee54e"} Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.464040 4858 scope.go:117] "RemoveContainer" containerID="1b263c9acdcec53a2477ac6ed780357fdbabc43bb0d855f08aef1df2722ee54e" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.497750 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ltx6z" event={"ID":"8dae3012-914a-4fdc-81b0-23dc98627b05","Type":"ContainerDied","Data":"c242687079d8393bf6b627b57754e87a8068b99262fd51b668befa18f96d68b9"} Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.497919 4858 generic.go:334] "Generic (PLEG): container finished" podID="8dae3012-914a-4fdc-81b0-23dc98627b05" containerID="c242687079d8393bf6b627b57754e87a8068b99262fd51b668befa18f96d68b9" exitCode=0 Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.546651 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jlwch" event={"ID":"734f1877-8907-44ff-b8af-c1a5f1b1395d","Type":"ContainerDied","Data":"c78f7601ab628ca8e384b75dd9e3d8684471c329d85b7b566ed2d7ec16e108f3"} Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.546744 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c78f7601ab628ca8e384b75dd9e3d8684471c329d85b7b566ed2d7ec16e108f3" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.546920 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jlwch" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.746892 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.749008 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.788041 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9d9cb9f97-shwkc"] Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.788454 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" podUID="38968ed0-a4a3-49df-b3b1-9816c0b77497" containerName="dnsmasq-dns" containerID="cri-o://4a2d3c0b69d2803c548a955731080f645cb9ddf696bba50c21cd6fa56a3d4f68" gracePeriod=10 Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.791771 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.828711 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d8667ddb9-dmdvl"] Jan 27 20:29:52 crc kubenswrapper[4858]: E0127 20:29:52.831122 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="734f1877-8907-44ff-b8af-c1a5f1b1395d" containerName="neutron-db-sync" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.831147 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="734f1877-8907-44ff-b8af-c1a5f1b1395d" containerName="neutron-db-sync" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.831518 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="734f1877-8907-44ff-b8af-c1a5f1b1395d" containerName="neutron-db-sync" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.834969 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.883981 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-dns-swift-storage-0\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.884020 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-ovsdbserver-sb\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.884048 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8wtw\" (UniqueName: \"kubernetes.io/projected/d76a7fd9-c6ae-468d-814d-4340d0312bcb-kube-api-access-d8wtw\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.884070 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-ovsdbserver-nb\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.884169 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-config\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.884348 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-dns-svc\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.885062 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d8667ddb9-dmdvl"] Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.928114 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f9dfd55dd-q9n8v"] Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.930758 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.940407 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.940631 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.940813 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.941015 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sk9xg" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.985940 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-dns-svc\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.985994 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-dns-swift-storage-0\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.986059 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-ovsdbserver-sb\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.987888 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-dns-svc\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.988034 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8wtw\" (UniqueName: \"kubernetes.io/projected/d76a7fd9-c6ae-468d-814d-4340d0312bcb-kube-api-access-d8wtw\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.988070 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-ovndb-tls-certs\") pod \"neutron-f9dfd55dd-q9n8v\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.988195 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-ovsdbserver-nb\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.988336 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbgv9\" (UniqueName: \"kubernetes.io/projected/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-kube-api-access-cbgv9\") pod \"neutron-f9dfd55dd-q9n8v\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.988371 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-combined-ca-bundle\") pod \"neutron-f9dfd55dd-q9n8v\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.988526 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-httpd-config\") pod \"neutron-f9dfd55dd-q9n8v\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.988527 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-dns-swift-storage-0\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.988781 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-config\") pod \"neutron-f9dfd55dd-q9n8v\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:52 crc kubenswrapper[4858]: I0127 20:29:52.988818 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-config\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.000463 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-ovsdbserver-nb\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.000513 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-ovsdbserver-sb\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.005737 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f9dfd55dd-q9n8v"] Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.007102 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-config\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:53 crc 
kubenswrapper[4858]: I0127 20:29:53.026431 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8wtw\" (UniqueName: \"kubernetes.io/projected/d76a7fd9-c6ae-468d-814d-4340d0312bcb-kube-api-access-d8wtw\") pod \"dnsmasq-dns-d8667ddb9-dmdvl\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.091241 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbgv9\" (UniqueName: \"kubernetes.io/projected/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-kube-api-access-cbgv9\") pod \"neutron-f9dfd55dd-q9n8v\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.091306 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-combined-ca-bundle\") pod \"neutron-f9dfd55dd-q9n8v\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.091373 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-httpd-config\") pod \"neutron-f9dfd55dd-q9n8v\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.091397 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-config\") pod \"neutron-f9dfd55dd-q9n8v\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.091478 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-ovndb-tls-certs\") pod \"neutron-f9dfd55dd-q9n8v\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.095087 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-ovndb-tls-certs\") pod \"neutron-f9dfd55dd-q9n8v\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.098445 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-combined-ca-bundle\") pod \"neutron-f9dfd55dd-q9n8v\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.103524 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-httpd-config\") pod \"neutron-f9dfd55dd-q9n8v\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.112414 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-config\") pod \"neutron-f9dfd55dd-q9n8v\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.130485 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbgv9\" (UniqueName: \"kubernetes.io/projected/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-kube-api-access-cbgv9\") pod \"neutron-f9dfd55dd-q9n8v\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.141054 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.171741 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.172121 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.205404 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.280330 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.323446 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.376604 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.448823 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.604431 4858 generic.go:334] "Generic (PLEG): container finished" podID="38968ed0-a4a3-49df-b3b1-9816c0b77497" containerID="4a2d3c0b69d2803c548a955731080f645cb9ddf696bba50c21cd6fa56a3d4f68" exitCode=0 Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.604510 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" event={"ID":"38968ed0-a4a3-49df-b3b1-9816c0b77497","Type":"ContainerDied","Data":"4a2d3c0b69d2803c548a955731080f645cb9ddf696bba50c21cd6fa56a3d4f68"} Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.604542 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" event={"ID":"38968ed0-a4a3-49df-b3b1-9816c0b77497","Type":"ContainerDied","Data":"11759ddff94a513304a9f1780452895b94ec1f40b668353520a0536f7e82dbc5"} Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.604569 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11759ddff94a513304a9f1780452895b94ec1f40b668353520a0536f7e82dbc5" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.608105 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508","Type":"ContainerStarted","Data":"6acb91f160503db50a5b4cf35d8fd00e145acfea6af2eaf197de4555e60d0fc3"} Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.610911 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"01d3077e-3576-4247-a840-2cb60819c113","Type":"ContainerStarted","Data":"51117531e0344114aa8578e89498ec566e883afdb6b6b0acb5a14f94f991dccf"} Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.641577 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3278beeb-52a3-4351-92f1-839e98e59395","Type":"ContainerStarted","Data":"3c4aab6b7e8f85c1a996922f0e2d67f712205508208d9a423e840409bfc5aa84"} Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.672528 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.742124 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.821530 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.838700 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-dns-svc\") pod \"38968ed0-a4a3-49df-b3b1-9816c0b77497\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.838844 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-ovsdbserver-nb\") pod \"38968ed0-a4a3-49df-b3b1-9816c0b77497\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.838930 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-ovsdbserver-sb\") pod \"38968ed0-a4a3-49df-b3b1-9816c0b77497\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.838973 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5rz2\" (UniqueName: \"kubernetes.io/projected/38968ed0-a4a3-49df-b3b1-9816c0b77497-kube-api-access-x5rz2\") pod \"38968ed0-a4a3-49df-b3b1-9816c0b77497\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.839038 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-config\") pod \"38968ed0-a4a3-49df-b3b1-9816c0b77497\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.839060 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-dns-swift-storage-0\") pod \"38968ed0-a4a3-49df-b3b1-9816c0b77497\" (UID: \"38968ed0-a4a3-49df-b3b1-9816c0b77497\") " Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.884435 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38968ed0-a4a3-49df-b3b1-9816c0b77497-kube-api-access-x5rz2" (OuterVolumeSpecName: "kube-api-access-x5rz2") pod "38968ed0-a4a3-49df-b3b1-9816c0b77497" (UID: "38968ed0-a4a3-49df-b3b1-9816c0b77497"). InnerVolumeSpecName "kube-api-access-x5rz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.919513 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "38968ed0-a4a3-49df-b3b1-9816c0b77497" (UID: "38968ed0-a4a3-49df-b3b1-9816c0b77497"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.944672 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.944710 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5rz2\" (UniqueName: \"kubernetes.io/projected/38968ed0-a4a3-49df-b3b1-9816c0b77497-kube-api-access-x5rz2\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.978253 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.983183 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "38968ed0-a4a3-49df-b3b1-9816c0b77497" (UID: "38968ed0-a4a3-49df-b3b1-9816c0b77497"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.988164 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38968ed0-a4a3-49df-b3b1-9816c0b77497" (UID: "38968ed0-a4a3-49df-b3b1-9816c0b77497"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:29:53 crc kubenswrapper[4858]: I0127 20:29:53.999153 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-config" (OuterVolumeSpecName: "config") pod "38968ed0-a4a3-49df-b3b1-9816c0b77497" (UID: "38968ed0-a4a3-49df-b3b1-9816c0b77497"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.008724 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "38968ed0-a4a3-49df-b3b1-9816c0b77497" (UID: "38968ed0-a4a3-49df-b3b1-9816c0b77497"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.047717 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.047747 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.047757 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.047765 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38968ed0-a4a3-49df-b3b1-9816c0b77497-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.107130 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.109390 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f9dfd55dd-q9n8v"] Jan 27 20:29:54 crc kubenswrapper[4858]: E0127 20:29:54.116861 4858 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.241215 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.308306 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.331403 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d8667ddb9-dmdvl"] Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.351794 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.372253 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-scripts\") pod \"8dae3012-914a-4fdc-81b0-23dc98627b05\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.372373 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-combined-ca-bundle\") pod \"8dae3012-914a-4fdc-81b0-23dc98627b05\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.372429 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-credential-keys\") pod \"8dae3012-914a-4fdc-81b0-23dc98627b05\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.372481 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bwv2\" (UniqueName: \"kubernetes.io/projected/8dae3012-914a-4fdc-81b0-23dc98627b05-kube-api-access-8bwv2\") pod \"8dae3012-914a-4fdc-81b0-23dc98627b05\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.372514 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-config-data\") pod \"8dae3012-914a-4fdc-81b0-23dc98627b05\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.372737 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-fernet-keys\") pod \"8dae3012-914a-4fdc-81b0-23dc98627b05\" (UID: \"8dae3012-914a-4fdc-81b0-23dc98627b05\") " Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.389801 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8dae3012-914a-4fdc-81b0-23dc98627b05" (UID: "8dae3012-914a-4fdc-81b0-23dc98627b05"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.390862 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-scripts" (OuterVolumeSpecName: "scripts") pod "8dae3012-914a-4fdc-81b0-23dc98627b05" (UID: "8dae3012-914a-4fdc-81b0-23dc98627b05"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.391951 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dae3012-914a-4fdc-81b0-23dc98627b05-kube-api-access-8bwv2" (OuterVolumeSpecName: "kube-api-access-8bwv2") pod "8dae3012-914a-4fdc-81b0-23dc98627b05" (UID: "8dae3012-914a-4fdc-81b0-23dc98627b05"). InnerVolumeSpecName "kube-api-access-8bwv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.398100 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8dae3012-914a-4fdc-81b0-23dc98627b05" (UID: "8dae3012-914a-4fdc-81b0-23dc98627b05"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.433819 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8dae3012-914a-4fdc-81b0-23dc98627b05" (UID: "8dae3012-914a-4fdc-81b0-23dc98627b05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.437994 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-config-data" (OuterVolumeSpecName: "config-data") pod "8dae3012-914a-4fdc-81b0-23dc98627b05" (UID: "8dae3012-914a-4fdc-81b0-23dc98627b05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.474203 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.474232 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.474241 4858 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.474250 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bwv2\" (UniqueName: \"kubernetes.io/projected/8dae3012-914a-4fdc-81b0-23dc98627b05-kube-api-access-8bwv2\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.474259 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.474269 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8dae3012-914a-4fdc-81b0-23dc98627b05-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.788131 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f9dfd55dd-q9n8v" event={"ID":"3d6da88b-9fe4-4e4f-afdf-f8dccf939679","Type":"ContainerStarted","Data":"aebd9c28c62e3cd62ffe1414a932881ca75567f27466f7217aedfe456dcd20db"} Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.800621 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5bf568dbc7-xlg4d"] Jan 27 20:29:54 crc kubenswrapper[4858]: E0127 20:29:54.801109 4858 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="8dae3012-914a-4fdc-81b0-23dc98627b05" containerName="keystone-bootstrap" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.801122 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dae3012-914a-4fdc-81b0-23dc98627b05" containerName="keystone-bootstrap" Jan 27 20:29:54 crc kubenswrapper[4858]: E0127 20:29:54.801137 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38968ed0-a4a3-49df-b3b1-9816c0b77497" containerName="dnsmasq-dns" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.801143 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="38968ed0-a4a3-49df-b3b1-9816c0b77497" containerName="dnsmasq-dns" Jan 27 20:29:54 crc kubenswrapper[4858]: E0127 20:29:54.801160 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38968ed0-a4a3-49df-b3b1-9816c0b77497" containerName="init" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.801166 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="38968ed0-a4a3-49df-b3b1-9816c0b77497" containerName="init" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.801351 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="38968ed0-a4a3-49df-b3b1-9816c0b77497" containerName="dnsmasq-dns" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.801362 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dae3012-914a-4fdc-81b0-23dc98627b05" containerName="keystone-bootstrap" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.802090 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.811411 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.820736 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.825504 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" event={"ID":"d76a7fd9-c6ae-468d-814d-4340d0312bcb","Type":"ContainerStarted","Data":"cc27a03eef5aee0267efa78e92ff2a000871ec149dbda9b3aac14ef68f4cc030"} Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.829115 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5bf568dbc7-xlg4d"] Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.888927 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ltx6z" event={"ID":"8dae3012-914a-4fdc-81b0-23dc98627b05","Type":"ContainerDied","Data":"4b5673672fab895a02130bf3f064b895bcb5f15eb96c97f67a0aebc297f06be7"} Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.888983 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b5673672fab895a02130bf3f064b895bcb5f15eb96c97f67a0aebc297f06be7" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.889089 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ltx6z" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.889412 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9d9cb9f97-shwkc" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.891909 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.893242 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-scripts\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.893370 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-internal-tls-certs\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.893451 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqgw2\" (UniqueName: \"kubernetes.io/projected/2c447296-df73-4efb-b85b-dc9d468d2d80-kube-api-access-gqgw2\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.893630 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-fernet-keys\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.893761 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-public-tls-certs\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.894095 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-config-data\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.894184 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-combined-ca-bundle\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.894338 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-credential-keys\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.984702 4858 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-9d9cb9f97-shwkc"] Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.995279 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-credential-keys\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.995617 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-scripts\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.995720 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-internal-tls-certs\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.995793 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqgw2\" (UniqueName: \"kubernetes.io/projected/2c447296-df73-4efb-b85b-dc9d468d2d80-kube-api-access-gqgw2\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.995901 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-fernet-keys\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.995995 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-public-tls-certs\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.996068 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-config-data\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:54 crc kubenswrapper[4858]: I0127 20:29:54.996146 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-combined-ca-bundle\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.000505 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9d9cb9f97-shwkc"] Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.012532 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-fernet-keys\") pod \"keystone-5bf568dbc7-xlg4d\" 
(UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.012995 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-internal-tls-certs\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.013216 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-scripts\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.015188 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-credential-keys\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.017223 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-config-data\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.032792 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-public-tls-certs\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.050414 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqgw2\" (UniqueName: \"kubernetes.io/projected/2c447296-df73-4efb-b85b-dc9d468d2d80-kube-api-access-gqgw2\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.051413 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c447296-df73-4efb-b85b-dc9d468d2d80-combined-ca-bundle\") pod \"keystone-5bf568dbc7-xlg4d\" (UID: \"2c447296-df73-4efb-b85b-dc9d468d2d80\") " pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.058728 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.123301 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.204890 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.747610 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c77755cc5-ffvng"] Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.751118 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.755201 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c77755cc5-ffvng"] Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.759427 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.760348 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.823516 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5bf568dbc7-xlg4d"] Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.823774 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-config\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.928445 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-httpd-config\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.929125 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-internal-tls-certs\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.929211 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-combined-ca-bundle\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.929246 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-config\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.929269 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6csx\" (UniqueName: \"kubernetes.io/projected/4991d38c-6548-43d3-b4b7-884b71af9f07-kube-api-access-v6csx\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.929288 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-public-tls-certs\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.929316 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-ovndb-tls-certs\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.945196 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-config\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.950801 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f9dfd55dd-q9n8v" event={"ID":"3d6da88b-9fe4-4e4f-afdf-f8dccf939679","Type":"ContainerStarted","Data":"e288a89f8b1f784e2218c3343b66f105090d3e016c5e1e75c29e27c37a16cc08"} Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.956331 4858 generic.go:334] "Generic (PLEG): container finished" podID="d76a7fd9-c6ae-468d-814d-4340d0312bcb" containerID="e836f5bc5c93f2e6ffbc44231a52e811850d0a0d575df0dda1ea9f7ece0325c8" exitCode=0 Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.956419 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" event={"ID":"d76a7fd9-c6ae-468d-814d-4340d0312bcb","Type":"ContainerDied","Data":"e836f5bc5c93f2e6ffbc44231a52e811850d0a0d575df0dda1ea9f7ece0325c8"} Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.968258 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508","Type":"ContainerStarted","Data":"a0393d519aed8fda89dc4bb31ff6c539972cf05f32fc03dd05146e42e6ce41b4"} Jan 27 20:29:55 crc kubenswrapper[4858]: I0127 20:29:55.986945 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bf568dbc7-xlg4d" event={"ID":"2c447296-df73-4efb-b85b-dc9d468d2d80","Type":"ContainerStarted","Data":"f991d3b9f64eec906e9194143e1d93b22a7dc74d66526443d579467e502cdfb7"} Jan 27 20:29:56 crc kubenswrapper[4858]: I0127 20:29:56.002884 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"01d3077e-3576-4247-a840-2cb60819c113","Type":"ContainerStarted","Data":"7d79048978fb62d2e5df62dc2ddbf6bd30ceeac4da867cc49ca0ef6342be60f8"} Jan 27 20:29:56 crc kubenswrapper[4858]: I0127 20:29:56.002925 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="b757c9de-8297-419d-9048-72cdf387c52d" containerName="watcher-applier" containerID="cri-o://5154fe764232f3dce69be43769c997b5c6b5ea8c01c78c02a7b17a4d896ced4d" gracePeriod=30 Jan 27 20:29:56 crc kubenswrapper[4858]: I0127 20:29:56.043065 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-combined-ca-bundle\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:56 crc kubenswrapper[4858]: I0127 20:29:56.043373 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6csx\" (UniqueName: \"kubernetes.io/projected/4991d38c-6548-43d3-b4b7-884b71af9f07-kube-api-access-v6csx\") pod \"neutron-5c77755cc5-ffvng\" (UID: 
\"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:56 crc kubenswrapper[4858]: I0127 20:29:56.043979 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-public-tls-certs\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:56 crc kubenswrapper[4858]: I0127 20:29:56.044040 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-ovndb-tls-certs\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:56 crc kubenswrapper[4858]: I0127 20:29:56.044096 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-httpd-config\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:56 crc kubenswrapper[4858]: I0127 20:29:56.044235 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-internal-tls-certs\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:56 crc kubenswrapper[4858]: I0127 20:29:56.055011 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-httpd-config\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:56 crc kubenswrapper[4858]: I0127 20:29:56.056837 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-ovndb-tls-certs\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:56 crc kubenswrapper[4858]: I0127 20:29:56.057089 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-public-tls-certs\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:56 crc kubenswrapper[4858]: I0127 20:29:56.057829 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-combined-ca-bundle\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:56 crc kubenswrapper[4858]: I0127 20:29:56.060133 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4991d38c-6548-43d3-b4b7-884b71af9f07-internal-tls-certs\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:56 crc kubenswrapper[4858]: I0127 20:29:56.073809 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6csx\" (UniqueName: \"kubernetes.io/projected/4991d38c-6548-43d3-b4b7-884b71af9f07-kube-api-access-v6csx\") pod \"neutron-5c77755cc5-ffvng\" (UID: \"4991d38c-6548-43d3-b4b7-884b71af9f07\") " pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:56 crc kubenswrapper[4858]: I0127 20:29:56.099602 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38968ed0-a4a3-49df-b3b1-9816c0b77497" path="/var/lib/kubelet/pods/38968ed0-a4a3-49df-b3b1-9816c0b77497/volumes" Jan 27 20:29:56 crc kubenswrapper[4858]: I0127 20:29:56.127013 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:29:57 crc kubenswrapper[4858]: I0127 20:29:57.023517 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f9dfd55dd-q9n8v" event={"ID":"3d6da88b-9fe4-4e4f-afdf-f8dccf939679","Type":"ContainerStarted","Data":"739c12354c4bba70f23579f5a79f3f3f786dbccb82067f18b91ac4392555ae6f"} Jan 27 20:29:57 crc kubenswrapper[4858]: I0127 20:29:57.025862 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:29:57 crc kubenswrapper[4858]: I0127 20:29:57.041756 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" event={"ID":"d76a7fd9-c6ae-468d-814d-4340d0312bcb","Type":"ContainerStarted","Data":"082aa311904c1f0750e52f3098be55a0f8a014c2c5ce9bb0384e3bfcdc163eef"} Jan 27 20:29:57 crc kubenswrapper[4858]: I0127 20:29:57.041920 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:29:57 crc kubenswrapper[4858]: I0127 20:29:57.059937 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508","Type":"ContainerStarted","Data":"d4d28fe9406c16a682feccd8d2ec588758e8d95a1ded305d26db615a3a1729c4"} Jan 27 20:29:57 crc kubenswrapper[4858]: I0127 20:29:57.061410 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-f9dfd55dd-q9n8v" podStartSLOduration=5.061387608 podStartE2EDuration="5.061387608s" podCreationTimestamp="2026-01-27 20:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:29:57.043220671 +0000 UTC m=+1341.751036377" watchObservedRunningTime="2026-01-27 20:29:57.061387608 +0000 UTC m=+1341.769203304" Jan 27 20:29:57 crc kubenswrapper[4858]: I0127 20:29:57.067689 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bf568dbc7-xlg4d" event={"ID":"2c447296-df73-4efb-b85b-dc9d468d2d80","Type":"ContainerStarted","Data":"b1dba4a66d6b304d10bd6ef34c160902ed4e2ee801d73f862ecd296b392d9cda"} Jan 27 20:29:57 crc kubenswrapper[4858]: I0127 20:29:57.068675 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:29:57 crc kubenswrapper[4858]: I0127 20:29:57.093451 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="3278beeb-52a3-4351-92f1-839e98e59395" containerName="watcher-decision-engine" containerID="cri-o://3c4aab6b7e8f85c1a996922f0e2d67f712205508208d9a423e840409bfc5aa84" gracePeriod=30 Jan 27 20:29:57 crc kubenswrapper[4858]: I0127 20:29:57.093738 4858 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"01d3077e-3576-4247-a840-2cb60819c113","Type":"ContainerStarted","Data":"70d9d0e5fd8930a93991dc7409a9cf825437ba8786be526d91ab11d45308c4a0"} Jan 27 20:29:57 crc kubenswrapper[4858]: I0127 20:29:57.101261 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" podStartSLOduration=5.101230903 podStartE2EDuration="5.101230903s" podCreationTimestamp="2026-01-27 20:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:29:57.075722584 +0000 UTC m=+1341.783538290" watchObservedRunningTime="2026-01-27 20:29:57.101230903 +0000 UTC m=+1341.809046610" Jan 27 20:29:57 crc kubenswrapper[4858]: I0127 20:29:57.109679 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c77755cc5-ffvng"] Jan 27 20:29:57 crc kubenswrapper[4858]: I0127 20:29:57.130653 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.130624576 podStartE2EDuration="6.130624576s" podCreationTimestamp="2026-01-27 20:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:29:57.12251006 +0000 UTC m=+1341.830325776" watchObservedRunningTime="2026-01-27 20:29:57.130624576 +0000 UTC m=+1341.838440282" Jan 27 20:29:57 crc kubenswrapper[4858]: I0127 20:29:57.159275 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5bf568dbc7-xlg4d" podStartSLOduration=3.159242826 podStartE2EDuration="3.159242826s" podCreationTimestamp="2026-01-27 20:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:29:57.145501377 +0000 UTC m=+1341.853317093" watchObservedRunningTime="2026-01-27 20:29:57.159242826 +0000 UTC m=+1341.867058542" Jan 27 20:29:57 crc kubenswrapper[4858]: I0127 20:29:57.181794 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.181762819 podStartE2EDuration="6.181762819s" podCreationTimestamp="2026-01-27 20:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:29:57.174323663 +0000 UTC m=+1341.882139399" watchObservedRunningTime="2026-01-27 20:29:57.181762819 +0000 UTC m=+1341.889578525" Jan 27 20:29:58 crc kubenswrapper[4858]: I0127 20:29:58.112139 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c77755cc5-ffvng" event={"ID":"4991d38c-6548-43d3-b4b7-884b71af9f07","Type":"ContainerStarted","Data":"16a1be58aee56a09c03f5297cc08a147f8f04419600e9d64b301ec19b030dbe9"} Jan 27 20:29:58 crc kubenswrapper[4858]: I0127 20:29:58.113081 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c77755cc5-ffvng" event={"ID":"4991d38c-6548-43d3-b4b7-884b71af9f07","Type":"ContainerStarted","Data":"78366ebdb53af1cc87cc33c2a826ac6963da617395f462c8d129bca150852717"} Jan 27 20:29:58 crc kubenswrapper[4858]: E0127 20:29:58.372111 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
5154fe764232f3dce69be43769c997b5c6b5ea8c01c78c02a7b17a4d896ced4d is running failed: container process not found" containerID="5154fe764232f3dce69be43769c997b5c6b5ea8c01c78c02a7b17a4d896ced4d" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 20:29:58 crc kubenswrapper[4858]: E0127 20:29:58.372583 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5154fe764232f3dce69be43769c997b5c6b5ea8c01c78c02a7b17a4d896ced4d is running failed: container process not found" containerID="5154fe764232f3dce69be43769c997b5c6b5ea8c01c78c02a7b17a4d896ced4d" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 20:29:58 crc kubenswrapper[4858]: E0127 20:29:58.372834 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5154fe764232f3dce69be43769c997b5c6b5ea8c01c78c02a7b17a4d896ced4d is running failed: container process not found" containerID="5154fe764232f3dce69be43769c997b5c6b5ea8c01c78c02a7b17a4d896ced4d" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 27 20:29:58 crc kubenswrapper[4858]: E0127 20:29:58.372868 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5154fe764232f3dce69be43769c997b5c6b5ea8c01c78c02a7b17a4d896ced4d is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="b757c9de-8297-419d-9048-72cdf387c52d" containerName="watcher-applier" Jan 27 20:29:59 crc kubenswrapper[4858]: I0127 20:29:59.043002 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 27 20:29:59 crc kubenswrapper[4858]: I0127 20:29:59.056115 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 27 20:29:59 crc kubenswrapper[4858]: I0127 20:29:59.187910 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nsgb9" event={"ID":"8222b78c-e8de-4992-8c5b-bcf030d629ff","Type":"ContainerStarted","Data":"a48d4202a2867e87a32c9e97495a3047369823ace0126b45c61d27e9af6d4c1e"} Jan 27 20:29:59 crc kubenswrapper[4858]: I0127 20:29:59.216249 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-nsgb9" podStartSLOduration=4.555353011 podStartE2EDuration="47.216222899s" podCreationTimestamp="2026-01-27 20:29:12 +0000 UTC" firstStartedPulling="2026-01-27 20:29:14.552186448 +0000 UTC m=+1299.260002154" lastFinishedPulling="2026-01-27 20:29:57.213056336 +0000 UTC m=+1341.920872042" observedRunningTime="2026-01-27 20:29:59.211184573 +0000 UTC m=+1343.919000299" watchObservedRunningTime="2026-01-27 20:29:59.216222899 +0000 UTC m=+1343.924038626" Jan 27 20:29:59 crc kubenswrapper[4858]: I0127 20:29:59.221421 4858 generic.go:334] "Generic (PLEG): container finished" podID="3278beeb-52a3-4351-92f1-839e98e59395" containerID="3c4aab6b7e8f85c1a996922f0e2d67f712205508208d9a423e840409bfc5aa84" exitCode=1 Jan 27 20:29:59 crc kubenswrapper[4858]: I0127 20:29:59.221583 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3278beeb-52a3-4351-92f1-839e98e59395","Type":"ContainerDied","Data":"3c4aab6b7e8f85c1a996922f0e2d67f712205508208d9a423e840409bfc5aa84"} Jan 27 20:29:59 crc kubenswrapper[4858]: I0127 20:29:59.221660 4858 scope.go:117] "RemoveContainer" 
containerID="1b263c9acdcec53a2477ac6ed780357fdbabc43bb0d855f08aef1df2722ee54e" Jan 27 20:29:59 crc kubenswrapper[4858]: I0127 20:29:59.232994 4858 generic.go:334] "Generic (PLEG): container finished" podID="b757c9de-8297-419d-9048-72cdf387c52d" containerID="5154fe764232f3dce69be43769c997b5c6b5ea8c01c78c02a7b17a4d896ced4d" exitCode=0 Jan 27 20:29:59 crc kubenswrapper[4858]: I0127 20:29:59.233450 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"b757c9de-8297-419d-9048-72cdf387c52d","Type":"ContainerDied","Data":"5154fe764232f3dce69be43769c997b5c6b5ea8c01c78c02a7b17a4d896ced4d"} Jan 27 20:29:59 crc kubenswrapper[4858]: I0127 20:29:59.331813 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:29:59 crc kubenswrapper[4858]: I0127 20:29:59.331883 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:30:00 crc kubenswrapper[4858]: I0127 20:30:00.146395 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75"] Jan 27 20:30:00 crc kubenswrapper[4858]: I0127 20:30:00.148245 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75" Jan 27 20:30:00 crc kubenswrapper[4858]: I0127 20:30:00.155513 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75"] Jan 27 20:30:00 crc kubenswrapper[4858]: I0127 20:30:00.191061 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 20:30:00 crc kubenswrapper[4858]: I0127 20:30:00.191362 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 20:30:00 crc kubenswrapper[4858]: I0127 20:30:00.264086 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a65f9c3-3b88-4bab-830f-00ba01b22f20-config-volume\") pod \"collect-profiles-29492430-dcj75\" (UID: \"6a65f9c3-3b88-4bab-830f-00ba01b22f20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75" Jan 27 20:30:00 crc kubenswrapper[4858]: I0127 20:30:00.265152 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvzsc\" (UniqueName: \"kubernetes.io/projected/6a65f9c3-3b88-4bab-830f-00ba01b22f20-kube-api-access-pvzsc\") pod \"collect-profiles-29492430-dcj75\" (UID: \"6a65f9c3-3b88-4bab-830f-00ba01b22f20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75" Jan 27 20:30:00 crc kubenswrapper[4858]: I0127 20:30:00.265443 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a65f9c3-3b88-4bab-830f-00ba01b22f20-secret-volume\") pod 
\"collect-profiles-29492430-dcj75\" (UID: \"6a65f9c3-3b88-4bab-830f-00ba01b22f20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75" Jan 27 20:30:00 crc kubenswrapper[4858]: I0127 20:30:00.367773 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvzsc\" (UniqueName: \"kubernetes.io/projected/6a65f9c3-3b88-4bab-830f-00ba01b22f20-kube-api-access-pvzsc\") pod \"collect-profiles-29492430-dcj75\" (UID: \"6a65f9c3-3b88-4bab-830f-00ba01b22f20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75" Jan 27 20:30:00 crc kubenswrapper[4858]: I0127 20:30:00.367981 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a65f9c3-3b88-4bab-830f-00ba01b22f20-secret-volume\") pod \"collect-profiles-29492430-dcj75\" (UID: \"6a65f9c3-3b88-4bab-830f-00ba01b22f20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75" Jan 27 20:30:00 crc kubenswrapper[4858]: I0127 20:30:00.368048 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a65f9c3-3b88-4bab-830f-00ba01b22f20-config-volume\") pod \"collect-profiles-29492430-dcj75\" (UID: \"6a65f9c3-3b88-4bab-830f-00ba01b22f20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75" Jan 27 20:30:00 crc kubenswrapper[4858]: I0127 20:30:00.369256 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a65f9c3-3b88-4bab-830f-00ba01b22f20-config-volume\") pod \"collect-profiles-29492430-dcj75\" (UID: \"6a65f9c3-3b88-4bab-830f-00ba01b22f20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75" Jan 27 20:30:00 crc kubenswrapper[4858]: I0127 20:30:00.378339 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a65f9c3-3b88-4bab-830f-00ba01b22f20-secret-volume\") pod \"collect-profiles-29492430-dcj75\" (UID: \"6a65f9c3-3b88-4bab-830f-00ba01b22f20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75" Jan 27 20:30:00 crc kubenswrapper[4858]: I0127 20:30:00.385572 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvzsc\" (UniqueName: \"kubernetes.io/projected/6a65f9c3-3b88-4bab-830f-00ba01b22f20-kube-api-access-pvzsc\") pod \"collect-profiles-29492430-dcj75\" (UID: \"6a65f9c3-3b88-4bab-830f-00ba01b22f20\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75" Jan 27 20:30:00 crc kubenswrapper[4858]: I0127 20:30:00.506709 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.233892 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.234302 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.260514 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.260605 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.289011 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.290061 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.298269 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.315036 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.338490 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.512024 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.540863 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.628631 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-config-data\") pod \"3278beeb-52a3-4351-92f1-839e98e59395\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.628685 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-combined-ca-bundle\") pod \"3278beeb-52a3-4351-92f1-839e98e59395\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.628768 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3278beeb-52a3-4351-92f1-839e98e59395-logs\") pod \"3278beeb-52a3-4351-92f1-839e98e59395\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.628837 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b757c9de-8297-419d-9048-72cdf387c52d-combined-ca-bundle\") pod \"b757c9de-8297-419d-9048-72cdf387c52d\" (UID: \"b757c9de-8297-419d-9048-72cdf387c52d\") " Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.628943 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b757c9de-8297-419d-9048-72cdf387c52d-config-data\") pod \"b757c9de-8297-419d-9048-72cdf387c52d\" (UID: \"b757c9de-8297-419d-9048-72cdf387c52d\") " Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.629056 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b757c9de-8297-419d-9048-72cdf387c52d-logs\") pod \"b757c9de-8297-419d-9048-72cdf387c52d\" (UID: \"b757c9de-8297-419d-9048-72cdf387c52d\") " Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.629107 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-custom-prometheus-ca\") pod \"3278beeb-52a3-4351-92f1-839e98e59395\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.629255 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk88c\" (UniqueName: \"kubernetes.io/projected/3278beeb-52a3-4351-92f1-839e98e59395-kube-api-access-gk88c\") pod \"3278beeb-52a3-4351-92f1-839e98e59395\" (UID: \"3278beeb-52a3-4351-92f1-839e98e59395\") " Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.629300 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c8kv\" (UniqueName: \"kubernetes.io/projected/b757c9de-8297-419d-9048-72cdf387c52d-kube-api-access-6c8kv\") pod \"b757c9de-8297-419d-9048-72cdf387c52d\" (UID: \"b757c9de-8297-419d-9048-72cdf387c52d\") " Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.636863 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b757c9de-8297-419d-9048-72cdf387c52d-logs" (OuterVolumeSpecName: "logs") pod "b757c9de-8297-419d-9048-72cdf387c52d" (UID: 
"b757c9de-8297-419d-9048-72cdf387c52d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.638216 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3278beeb-52a3-4351-92f1-839e98e59395-logs" (OuterVolumeSpecName: "logs") pod "3278beeb-52a3-4351-92f1-839e98e59395" (UID: "3278beeb-52a3-4351-92f1-839e98e59395"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.641373 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b757c9de-8297-419d-9048-72cdf387c52d-kube-api-access-6c8kv" (OuterVolumeSpecName: "kube-api-access-6c8kv") pod "b757c9de-8297-419d-9048-72cdf387c52d" (UID: "b757c9de-8297-419d-9048-72cdf387c52d"). InnerVolumeSpecName "kube-api-access-6c8kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.641743 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3278beeb-52a3-4351-92f1-839e98e59395-kube-api-access-gk88c" (OuterVolumeSpecName: "kube-api-access-gk88c") pod "3278beeb-52a3-4351-92f1-839e98e59395" (UID: "3278beeb-52a3-4351-92f1-839e98e59395"). InnerVolumeSpecName "kube-api-access-gk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.677756 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3278beeb-52a3-4351-92f1-839e98e59395" (UID: "3278beeb-52a3-4351-92f1-839e98e59395"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.711903 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b757c9de-8297-419d-9048-72cdf387c52d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b757c9de-8297-419d-9048-72cdf387c52d" (UID: "b757c9de-8297-419d-9048-72cdf387c52d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.734025 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b757c9de-8297-419d-9048-72cdf387c52d-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.734077 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk88c\" (UniqueName: \"kubernetes.io/projected/3278beeb-52a3-4351-92f1-839e98e59395-kube-api-access-gk88c\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.734092 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c8kv\" (UniqueName: \"kubernetes.io/projected/b757c9de-8297-419d-9048-72cdf387c52d-kube-api-access-6c8kv\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.734107 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.734118 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3278beeb-52a3-4351-92f1-839e98e59395-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.734131 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b757c9de-8297-419d-9048-72cdf387c52d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.741875 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.742122 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="a40095d8-0b5f-4fc9-a4e6-776a899d41e0" containerName="watcher-api-log" containerID="cri-o://feaba0008f23f1e7aee49e3e0f41aa88c51ba0b941c77f6182e459112d8408b9" gracePeriod=30 Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.742639 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="a40095d8-0b5f-4fc9-a4e6-776a899d41e0" containerName="watcher-api" containerID="cri-o://c6d284b1a3bea0cf002332c36984d2ec019deb16b0466ac5b771dc9aff758b76" gracePeriod=30 Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.755590 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5f7fd77bcb-cxmbt" podUID="2ec05cb1-c40c-48cb-ba64-9321abb6287c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.802988 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "3278beeb-52a3-4351-92f1-839e98e59395" (UID: "3278beeb-52a3-4351-92f1-839e98e59395"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.835652 4858 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.871436 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-config-data" (OuterVolumeSpecName: "config-data") pod "3278beeb-52a3-4351-92f1-839e98e59395" (UID: "3278beeb-52a3-4351-92f1-839e98e59395"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.877090 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b757c9de-8297-419d-9048-72cdf387c52d-config-data" (OuterVolumeSpecName: "config-data") pod "b757c9de-8297-419d-9048-72cdf387c52d" (UID: "b757c9de-8297-419d-9048-72cdf387c52d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.937638 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3278beeb-52a3-4351-92f1-839e98e59395-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:02 crc kubenswrapper[4858]: I0127 20:30:02.938204 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b757c9de-8297-419d-9048-72cdf387c52d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.178116 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57556bc8bb-j4fhs" podUID="996129af-9ae9-44ca-b677-2c27bf71847d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.209797 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.295806 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-789b49c6fc-xkx87"] Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.296125 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" podUID="8b883308-9933-4034-91e2-5562130c6f10" containerName="dnsmasq-dns" containerID="cri-o://f36e73984958cfc9d6db231ecc55a91c7addac4daac8dcd6c320aa7606bd832b" gracePeriod=10 Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.312785 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b795cea-c66d-4bca-8e9c-7da6cf08adf8","Type":"ContainerStarted","Data":"b4e13937d9f6123c3847e871437efbd5c11818b2ed3824299b82634dd6f9b0cb"} Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.326959 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"b757c9de-8297-419d-9048-72cdf387c52d","Type":"ContainerDied","Data":"77daa581872a0e4e3da92345f882117415bf7e77f987bc259d0df8dc91c5fb3a"} Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.327023 4858 scope.go:117] "RemoveContainer" 
containerID="5154fe764232f3dce69be43769c997b5c6b5ea8c01c78c02a7b17a4d896ced4d" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.327155 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.345338 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c77755cc5-ffvng" event={"ID":"4991d38c-6548-43d3-b4b7-884b71af9f07","Type":"ContainerStarted","Data":"e7a3654ad92207081cd61b8320b42676abd5bc677bb9479a45c28059198a0bca"} Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.346003 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.360139 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gc4mg" event={"ID":"b7c7b1cd-a2a1-4bd2-a57c-715448327967","Type":"ContainerStarted","Data":"cb1fcfbc38322e9f89cec41c1db7af41b384db137a28f509ce0209026038b3d1"} Jan 27 20:30:03 crc kubenswrapper[4858]: W0127 20:30:03.363632 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a65f9c3_3b88_4bab_830f_00ba01b22f20.slice/crio-c1e472ddf0c0f8e1da6ba628eb61b3a0f71b46a5564e673048a921413c030c90 WatchSource:0}: Error finding container c1e472ddf0c0f8e1da6ba628eb61b3a0f71b46a5564e673048a921413c030c90: Status 404 returned error can't find the container with id c1e472ddf0c0f8e1da6ba628eb61b3a0f71b46a5564e673048a921413c030c90 Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.364207 4858 generic.go:334] "Generic (PLEG): container finished" podID="a40095d8-0b5f-4fc9-a4e6-776a899d41e0" containerID="feaba0008f23f1e7aee49e3e0f41aa88c51ba0b941c77f6182e459112d8408b9" exitCode=143 Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.364285 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a40095d8-0b5f-4fc9-a4e6-776a899d41e0","Type":"ContainerDied","Data":"feaba0008f23f1e7aee49e3e0f41aa88c51ba0b941c77f6182e459112d8408b9"} Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.366142 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6n2n5" event={"ID":"047f39f4-e397-46e4-a998-4bf8060a1114","Type":"ContainerStarted","Data":"3405fccebf6de7872af9821078dd1c457d05027ef7264c9c69ead0bb38bec513"} Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.377665 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75"] Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.377749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3278beeb-52a3-4351-92f1-839e98e59395","Type":"ContainerDied","Data":"24a63ec25650d6fabcb0e669a7e8483f6be6bf8debb823a0c4c397c414b47ef0"} Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.377800 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.377816 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.378094 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.379306 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.394278 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c77755cc5-ffvng" podStartSLOduration=8.394253677 podStartE2EDuration="8.394253677s" podCreationTimestamp="2026-01-27 20:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:30:03.378079938 +0000 UTC m=+1348.085895654" watchObservedRunningTime="2026-01-27 20:30:03.394253677 +0000 UTC m=+1348.102069383" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.447415 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-6n2n5" podStartSLOduration=3.129516535 podStartE2EDuration="50.447243053s" podCreationTimestamp="2026-01-27 20:29:13 +0000 UTC" firstStartedPulling="2026-01-27 20:29:15.25372179 +0000 UTC m=+1299.961537496" lastFinishedPulling="2026-01-27 20:30:02.571448308 +0000 UTC m=+1347.279264014" observedRunningTime="2026-01-27 20:30:03.421184668 +0000 UTC m=+1348.129000374" watchObservedRunningTime="2026-01-27 20:30:03.447243053 +0000 UTC m=+1348.155058759" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.460190 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-gc4mg" podStartSLOduration=7.53819124 podStartE2EDuration="50.460172138s" podCreationTimestamp="2026-01-27 20:29:13 +0000 UTC" firstStartedPulling="2026-01-27 20:29:15.724707017 +0000 UTC m=+1300.432522713" lastFinishedPulling="2026-01-27 20:29:58.646687905 +0000 UTC m=+1343.354503611" observedRunningTime="2026-01-27 20:30:03.445859443 +0000 UTC m=+1348.153675159" watchObservedRunningTime="2026-01-27 20:30:03.460172138 +0000 UTC m=+1348.167987844" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.546318 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.580387 4858 scope.go:117] "RemoveContainer" containerID="3c4aab6b7e8f85c1a996922f0e2d67f712205508208d9a423e840409bfc5aa84" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.596640 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.687094 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.706626 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 27 20:30:03 crc kubenswrapper[4858]: E0127 20:30:03.707183 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3278beeb-52a3-4351-92f1-839e98e59395" containerName="watcher-decision-engine" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.707204 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3278beeb-52a3-4351-92f1-839e98e59395" containerName="watcher-decision-engine" Jan 27 20:30:03 crc kubenswrapper[4858]: E0127 20:30:03.707218 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3278beeb-52a3-4351-92f1-839e98e59395" containerName="watcher-decision-engine" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.707225 4858 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3278beeb-52a3-4351-92f1-839e98e59395" containerName="watcher-decision-engine" Jan 27 20:30:03 crc kubenswrapper[4858]: E0127 20:30:03.707239 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b757c9de-8297-419d-9048-72cdf387c52d" containerName="watcher-applier" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.707245 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b757c9de-8297-419d-9048-72cdf387c52d" containerName="watcher-applier" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.707472 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3278beeb-52a3-4351-92f1-839e98e59395" containerName="watcher-decision-engine" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.707497 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b757c9de-8297-419d-9048-72cdf387c52d" containerName="watcher-applier" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.708289 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.712908 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.718477 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.740015 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.740943 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3278beeb-52a3-4351-92f1-839e98e59395" containerName="watcher-decision-engine" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.741830 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.745233 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.747118 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.753372 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.886956 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16f660ae-e2f1-4e87-9e6a-83338f9228e9-logs\") pod \"watcher-applier-0\" (UID: \"16f660ae-e2f1-4e87-9e6a-83338f9228e9\") " pod="openstack/watcher-applier-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.887066 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-logs\") pod \"watcher-decision-engine-0\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.887114 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.887181 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql2lf\" (UniqueName: \"kubernetes.io/projected/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-kube-api-access-ql2lf\") pod \"watcher-decision-engine-0\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.887227 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f660ae-e2f1-4e87-9e6a-83338f9228e9-config-data\") pod \"watcher-applier-0\" (UID: \"16f660ae-e2f1-4e87-9e6a-83338f9228e9\") " pod="openstack/watcher-applier-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.887277 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlvtd\" (UniqueName: \"kubernetes.io/projected/16f660ae-e2f1-4e87-9e6a-83338f9228e9-kube-api-access-tlvtd\") pod \"watcher-applier-0\" (UID: \"16f660ae-e2f1-4e87-9e6a-83338f9228e9\") " pod="openstack/watcher-applier-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.887390 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.887485 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/16f660ae-e2f1-4e87-9e6a-83338f9228e9-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"16f660ae-e2f1-4e87-9e6a-83338f9228e9\") " pod="openstack/watcher-applier-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.887582 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.990899 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.991445 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f660ae-e2f1-4e87-9e6a-83338f9228e9-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"16f660ae-e2f1-4e87-9e6a-83338f9228e9\") " pod="openstack/watcher-applier-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.991489 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.991542 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16f660ae-e2f1-4e87-9e6a-83338f9228e9-logs\") pod \"watcher-applier-0\" (UID: \"16f660ae-e2f1-4e87-9e6a-83338f9228e9\") " pod="openstack/watcher-applier-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.991605 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-logs\") pod \"watcher-decision-engine-0\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.991638 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.991692 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql2lf\" (UniqueName: \"kubernetes.io/projected/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-kube-api-access-ql2lf\") pod \"watcher-decision-engine-0\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.991730 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f660ae-e2f1-4e87-9e6a-83338f9228e9-config-data\") pod \"watcher-applier-0\" (UID: 
\"16f660ae-e2f1-4e87-9e6a-83338f9228e9\") " pod="openstack/watcher-applier-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.991754 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlvtd\" (UniqueName: \"kubernetes.io/projected/16f660ae-e2f1-4e87-9e6a-83338f9228e9-kube-api-access-tlvtd\") pod \"watcher-applier-0\" (UID: \"16f660ae-e2f1-4e87-9e6a-83338f9228e9\") " pod="openstack/watcher-applier-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.993322 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16f660ae-e2f1-4e87-9e6a-83338f9228e9-logs\") pod \"watcher-applier-0\" (UID: \"16f660ae-e2f1-4e87-9e6a-83338f9228e9\") " pod="openstack/watcher-applier-0" Jan 27 20:30:03 crc kubenswrapper[4858]: I0127 20:30:03.996948 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-logs\") pod \"watcher-decision-engine-0\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.007450 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f660ae-e2f1-4e87-9e6a-83338f9228e9-config-data\") pod \"watcher-applier-0\" (UID: \"16f660ae-e2f1-4e87-9e6a-83338f9228e9\") " pod="openstack/watcher-applier-0" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.008010 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.012331 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.012875 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f660ae-e2f1-4e87-9e6a-83338f9228e9-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"16f660ae-e2f1-4e87-9e6a-83338f9228e9\") " pod="openstack/watcher-applier-0" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.028596 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.037989 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql2lf\" (UniqueName: \"kubernetes.io/projected/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-kube-api-access-ql2lf\") pod \"watcher-decision-engine-0\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.038021 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlvtd\" 
(UniqueName: \"kubernetes.io/projected/16f660ae-e2f1-4e87-9e6a-83338f9228e9-kube-api-access-tlvtd\") pod \"watcher-applier-0\" (UID: \"16f660ae-e2f1-4e87-9e6a-83338f9228e9\") " pod="openstack/watcher-applier-0" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.073816 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.084161 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.129385 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3278beeb-52a3-4351-92f1-839e98e59395" path="/var/lib/kubelet/pods/3278beeb-52a3-4351-92f1-839e98e59395/volumes" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.156787 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b757c9de-8297-419d-9048-72cdf387c52d" path="/var/lib/kubelet/pods/b757c9de-8297-419d-9048-72cdf387c52d/volumes" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.406747 4858 generic.go:334] "Generic (PLEG): container finished" podID="6a65f9c3-3b88-4bab-830f-00ba01b22f20" containerID="dd316e48f868476ba9b94a82472e4f9be6a0ada9906a96f774e6a7d60dbcdb01" exitCode=0 Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.410610 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75" event={"ID":"6a65f9c3-3b88-4bab-830f-00ba01b22f20","Type":"ContainerDied","Data":"dd316e48f868476ba9b94a82472e4f9be6a0ada9906a96f774e6a7d60dbcdb01"} Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.410670 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75" event={"ID":"6a65f9c3-3b88-4bab-830f-00ba01b22f20","Type":"ContainerStarted","Data":"c1e472ddf0c0f8e1da6ba628eb61b3a0f71b46a5564e673048a921413c030c90"} Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.454278 4858 generic.go:334] "Generic (PLEG): container finished" podID="8b883308-9933-4034-91e2-5562130c6f10" containerID="f36e73984958cfc9d6db231ecc55a91c7addac4daac8dcd6c320aa7606bd832b" exitCode=0 Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.454511 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" event={"ID":"8b883308-9933-4034-91e2-5562130c6f10","Type":"ContainerDied","Data":"f36e73984958cfc9d6db231ecc55a91c7addac4daac8dcd6c320aa7606bd832b"} Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.454563 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" event={"ID":"8b883308-9933-4034-91e2-5562130c6f10","Type":"ContainerDied","Data":"43acb1747dd496ae396ed17824c3cfd46ac81cac217fa1bbf8d234639162781e"} Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.454577 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43acb1747dd496ae396ed17824c3cfd46ac81cac217fa1bbf8d234639162781e" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.459509 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.503397 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.625719 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-dns-swift-storage-0\") pod \"8b883308-9933-4034-91e2-5562130c6f10\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.626138 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwk2s\" (UniqueName: \"kubernetes.io/projected/8b883308-9933-4034-91e2-5562130c6f10-kube-api-access-pwk2s\") pod \"8b883308-9933-4034-91e2-5562130c6f10\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.626212 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-config\") pod \"8b883308-9933-4034-91e2-5562130c6f10\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.626265 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-ovsdbserver-nb\") pod \"8b883308-9933-4034-91e2-5562130c6f10\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.626303 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-dns-svc\") pod \"8b883308-9933-4034-91e2-5562130c6f10\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.626345 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-ovsdbserver-sb\") pod \"8b883308-9933-4034-91e2-5562130c6f10\" (UID: \"8b883308-9933-4034-91e2-5562130c6f10\") " Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.682141 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b883308-9933-4034-91e2-5562130c6f10-kube-api-access-pwk2s" (OuterVolumeSpecName: "kube-api-access-pwk2s") pod "8b883308-9933-4034-91e2-5562130c6f10" (UID: "8b883308-9933-4034-91e2-5562130c6f10"). InnerVolumeSpecName "kube-api-access-pwk2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.734942 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwk2s\" (UniqueName: \"kubernetes.io/projected/8b883308-9933-4034-91e2-5562130c6f10-kube-api-access-pwk2s\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.735321 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b883308-9933-4034-91e2-5562130c6f10" (UID: "8b883308-9933-4034-91e2-5562130c6f10"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.747610 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-config" (OuterVolumeSpecName: "config") pod "8b883308-9933-4034-91e2-5562130c6f10" (UID: "8b883308-9933-4034-91e2-5562130c6f10"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.810599 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8b883308-9933-4034-91e2-5562130c6f10" (UID: "8b883308-9933-4034-91e2-5562130c6f10"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.825080 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8b883308-9933-4034-91e2-5562130c6f10" (UID: "8b883308-9933-4034-91e2-5562130c6f10"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.836663 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.836724 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.836736 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.836744 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.867918 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8b883308-9933-4034-91e2-5562130c6f10" (UID: "8b883308-9933-4034-91e2-5562130c6f10"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:04 crc kubenswrapper[4858]: I0127 20:30:04.939890 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b883308-9933-4034-91e2-5562130c6f10-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:05 crc kubenswrapper[4858]: I0127 20:30:05.017795 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 27 20:30:05 crc kubenswrapper[4858]: W0127 20:30:05.059337 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16f660ae_e2f1_4e87_9e6a_83338f9228e9.slice/crio-3b46d67561645918da8d8612e2644e97d137fededc472ff7d842fca3e86d1a7a WatchSource:0}: Error finding container 3b46d67561645918da8d8612e2644e97d137fededc472ff7d842fca3e86d1a7a: Status 404 returned error can't find the container with id 3b46d67561645918da8d8612e2644e97d137fededc472ff7d842fca3e86d1a7a Jan 27 20:30:05 crc kubenswrapper[4858]: I0127 20:30:05.241854 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 20:30:05 crc kubenswrapper[4858]: I0127 20:30:05.483633 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"16f660ae-e2f1-4e87-9e6a-83338f9228e9","Type":"ContainerStarted","Data":"3b46d67561645918da8d8612e2644e97d137fededc472ff7d842fca3e86d1a7a"} Jan 27 20:30:05 crc kubenswrapper[4858]: I0127 20:30:05.490398 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" Jan 27 20:30:05 crc kubenswrapper[4858]: I0127 20:30:05.493691 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"99d9c559-c61f-4bc2-907b-af9f9be0ce1b","Type":"ContainerStarted","Data":"51a590607a0baf8c11d99fc07451727abefdfca06fdf6ae5d8a0c1436d9b24d3"} Jan 27 20:30:05 crc kubenswrapper[4858]: I0127 20:30:05.493843 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 20:30:05 crc kubenswrapper[4858]: I0127 20:30:05.493878 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 20:30:05 crc kubenswrapper[4858]: I0127 20:30:05.522701 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-789b49c6fc-xkx87"] Jan 27 20:30:05 crc kubenswrapper[4858]: I0127 20:30:05.541347 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-789b49c6fc-xkx87"] Jan 27 20:30:05 crc kubenswrapper[4858]: I0127 20:30:05.886586 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a40095d8-0b5f-4fc9-a4e6-776a899d41e0" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": read tcp 10.217.0.2:57890->10.217.0.161:9322: read: connection reset by peer" Jan 27 20:30:05 crc kubenswrapper[4858]: I0127 20:30:05.887325 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a40095d8-0b5f-4fc9-a4e6-776a899d41e0" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.161:9322/\": read tcp 10.217.0.2:57884->10.217.0.161:9322: read: connection reset by peer" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.049994 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.180705 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b883308-9933-4034-91e2-5562130c6f10" path="/var/lib/kubelet/pods/8b883308-9933-4034-91e2-5562130c6f10/volumes" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.192892 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a65f9c3-3b88-4bab-830f-00ba01b22f20-config-volume\") pod \"6a65f9c3-3b88-4bab-830f-00ba01b22f20\" (UID: \"6a65f9c3-3b88-4bab-830f-00ba01b22f20\") " Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.193021 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a65f9c3-3b88-4bab-830f-00ba01b22f20-secret-volume\") pod \"6a65f9c3-3b88-4bab-830f-00ba01b22f20\" (UID: \"6a65f9c3-3b88-4bab-830f-00ba01b22f20\") " Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.193053 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvzsc\" (UniqueName: \"kubernetes.io/projected/6a65f9c3-3b88-4bab-830f-00ba01b22f20-kube-api-access-pvzsc\") pod \"6a65f9c3-3b88-4bab-830f-00ba01b22f20\" (UID: \"6a65f9c3-3b88-4bab-830f-00ba01b22f20\") " Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.217670 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a65f9c3-3b88-4bab-830f-00ba01b22f20-config-volume" (OuterVolumeSpecName: "config-volume") pod "6a65f9c3-3b88-4bab-830f-00ba01b22f20" (UID: "6a65f9c3-3b88-4bab-830f-00ba01b22f20"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.230469 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a65f9c3-3b88-4bab-830f-00ba01b22f20-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6a65f9c3-3b88-4bab-830f-00ba01b22f20" (UID: "6a65f9c3-3b88-4bab-830f-00ba01b22f20"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.230922 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a65f9c3-3b88-4bab-830f-00ba01b22f20-kube-api-access-pvzsc" (OuterVolumeSpecName: "kube-api-access-pvzsc") pod "6a65f9c3-3b88-4bab-830f-00ba01b22f20" (UID: "6a65f9c3-3b88-4bab-830f-00ba01b22f20"). InnerVolumeSpecName "kube-api-access-pvzsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.295737 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a65f9c3-3b88-4bab-830f-00ba01b22f20-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.295780 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a65f9c3-3b88-4bab-830f-00ba01b22f20-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.295790 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvzsc\" (UniqueName: \"kubernetes.io/projected/6a65f9c3-3b88-4bab-830f-00ba01b22f20-kube-api-access-pvzsc\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.404133 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.404880 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.510881 4858 generic.go:334] "Generic (PLEG): container finished" podID="a40095d8-0b5f-4fc9-a4e6-776a899d41e0" containerID="c6d284b1a3bea0cf002332c36984d2ec019deb16b0466ac5b771dc9aff758b76" exitCode=0 Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.510960 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a40095d8-0b5f-4fc9-a4e6-776a899d41e0","Type":"ContainerDied","Data":"c6d284b1a3bea0cf002332c36984d2ec019deb16b0466ac5b771dc9aff758b76"} Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.510991 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a40095d8-0b5f-4fc9-a4e6-776a899d41e0","Type":"ContainerDied","Data":"27afcf0b2a2eb19d6d712310686a42187b444e5a7cad29d229e3bb0b2eff74ab"} Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.511006 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27afcf0b2a2eb19d6d712310686a42187b444e5a7cad29d229e3bb0b2eff74ab" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.512230 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"99d9c559-c61f-4bc2-907b-af9f9be0ce1b","Type":"ContainerStarted","Data":"432463a1935addf687650c2c490967ca0058fce8c9ee6475fba55ec2817ebc74"} Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.514460 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75" event={"ID":"6a65f9c3-3b88-4bab-830f-00ba01b22f20","Type":"ContainerDied","Data":"c1e472ddf0c0f8e1da6ba628eb61b3a0f71b46a5564e673048a921413c030c90"} Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.514488 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1e472ddf0c0f8e1da6ba628eb61b3a0f71b46a5564e673048a921413c030c90" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.514533 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.533318 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"16f660ae-e2f1-4e87-9e6a-83338f9228e9","Type":"ContainerStarted","Data":"2bfdfe5cb0f657b2cab9c6555a346fdaf7fd2a84965010e8d0bcdc25c3a12f63"} Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.538339 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.5383214020000002 podStartE2EDuration="3.538321402s" podCreationTimestamp="2026-01-27 20:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:30:06.532975247 +0000 UTC m=+1351.240790953" watchObservedRunningTime="2026-01-27 20:30:06.538321402 +0000 UTC m=+1351.246137108" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.549894 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=3.549872167 podStartE2EDuration="3.549872167s" podCreationTimestamp="2026-01-27 20:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:30:06.549588719 +0000 UTC m=+1351.257404425" watchObservedRunningTime="2026-01-27 20:30:06.549872167 +0000 UTC m=+1351.257687873" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.594566 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.708759 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-combined-ca-bundle\") pod \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.709644 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-logs\") pod \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.709771 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-config-data\") pod \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.709843 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-custom-prometheus-ca\") pod \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.710112 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj9ss\" (UniqueName: \"kubernetes.io/projected/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-kube-api-access-bj9ss\") pod \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\" (UID: \"a40095d8-0b5f-4fc9-a4e6-776a899d41e0\") " Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.711622 4858 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-logs" (OuterVolumeSpecName: "logs") pod "a40095d8-0b5f-4fc9-a4e6-776a899d41e0" (UID: "a40095d8-0b5f-4fc9-a4e6-776a899d41e0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.727967 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-kube-api-access-bj9ss" (OuterVolumeSpecName: "kube-api-access-bj9ss") pod "a40095d8-0b5f-4fc9-a4e6-776a899d41e0" (UID: "a40095d8-0b5f-4fc9-a4e6-776a899d41e0"). InnerVolumeSpecName "kube-api-access-bj9ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.749524 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a40095d8-0b5f-4fc9-a4e6-776a899d41e0" (UID: "a40095d8-0b5f-4fc9-a4e6-776a899d41e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.773106 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "a40095d8-0b5f-4fc9-a4e6-776a899d41e0" (UID: "a40095d8-0b5f-4fc9-a4e6-776a899d41e0"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:06 crc kubenswrapper[4858]: E0127 20:30:06.791993 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a65f9c3_3b88_4bab_830f_00ba01b22f20.slice\": RecentStats: unable to find data in memory cache]" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.803651 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-config-data" (OuterVolumeSpecName: "config-data") pod "a40095d8-0b5f-4fc9-a4e6-776a899d41e0" (UID: "a40095d8-0b5f-4fc9-a4e6-776a899d41e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.812542 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.812630 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.812643 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.812655 4858 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:06 crc kubenswrapper[4858]: I0127 20:30:06.812669 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj9ss\" (UniqueName: \"kubernetes.io/projected/a40095d8-0b5f-4fc9-a4e6-776a899d41e0-kube-api-access-bj9ss\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.219188 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.219382 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.360283 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.545414 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.589814 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.604603 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"]
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.628571 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"]
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.654666 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"]
Jan 27 20:30:07 crc kubenswrapper[4858]: E0127 20:30:07.655214 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a40095d8-0b5f-4fc9-a4e6-776a899d41e0" containerName="watcher-api"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.655240 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40095d8-0b5f-4fc9-a4e6-776a899d41e0" containerName="watcher-api"
Jan 27 20:30:07 crc kubenswrapper[4858]: E0127 20:30:07.655261 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b883308-9933-4034-91e2-5562130c6f10" containerName="dnsmasq-dns"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.655270 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b883308-9933-4034-91e2-5562130c6f10" containerName="dnsmasq-dns"
Jan 27 20:30:07 crc kubenswrapper[4858]: E0127 20:30:07.655285 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a40095d8-0b5f-4fc9-a4e6-776a899d41e0" containerName="watcher-api-log"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.655295 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40095d8-0b5f-4fc9-a4e6-776a899d41e0" containerName="watcher-api-log"
Jan 27 20:30:07 crc kubenswrapper[4858]: E0127 20:30:07.655334 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a65f9c3-3b88-4bab-830f-00ba01b22f20" containerName="collect-profiles"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.655341 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a65f9c3-3b88-4bab-830f-00ba01b22f20" containerName="collect-profiles"
Jan 27 20:30:07 crc kubenswrapper[4858]: E0127 20:30:07.655354 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b883308-9933-4034-91e2-5562130c6f10" containerName="init"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.655360 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b883308-9933-4034-91e2-5562130c6f10" containerName="init"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.655536 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a65f9c3-3b88-4bab-830f-00ba01b22f20" containerName="collect-profiles"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.655574 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b883308-9933-4034-91e2-5562130c6f10" containerName="dnsmasq-dns"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.655592 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a40095d8-0b5f-4fc9-a4e6-776a899d41e0" containerName="watcher-api-log"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.655603 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a40095d8-0b5f-4fc9-a4e6-776a899d41e0" containerName="watcher-api"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.656678 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.665150 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.665358 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.665486 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.696725 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.760463 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxdp6\" (UniqueName: \"kubernetes.io/projected/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-kube-api-access-fxdp6\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.760565 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.760598 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.760641 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-logs\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.760681 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-config-data\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.760703 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-public-tls-certs\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.760723 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.861510 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxdp6\" (UniqueName: \"kubernetes.io/projected/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-kube-api-access-fxdp6\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.861729 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.861763 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.861803 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-logs\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.861839 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-config-data\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.861865 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-public-tls-certs\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.861882 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.862840 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-logs\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.871830 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.872186 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-public-tls-certs\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.872308 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.880016 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxdp6\" (UniqueName: \"kubernetes.io/projected/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-kube-api-access-fxdp6\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.880576 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-config-data\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.882156 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/49af05ef-dc73-4178-a4f8-ce9191c8fa3d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"49af05ef-dc73-4178-a4f8-ce9191c8fa3d\") " pod="openstack/watcher-api-0"
Jan 27 20:30:07 crc kubenswrapper[4858]: I0127 20:30:07.991157 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Jan 27 20:30:08 crc kubenswrapper[4858]: I0127 20:30:08.097493 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a40095d8-0b5f-4fc9-a4e6-776a899d41e0" path="/var/lib/kubelet/pods/a40095d8-0b5f-4fc9-a4e6-776a899d41e0/volumes"
Jan 27 20:30:08 crc kubenswrapper[4858]: I0127 20:30:08.674953 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Jan 27 20:30:08 crc kubenswrapper[4858]: W0127 20:30:08.692437 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49af05ef_dc73_4178_a4f8_ce9191c8fa3d.slice/crio-aabfb7668896836356857197b86c9d6bbe1314111670b6cd45481ae31054d8cf WatchSource:0}: Error finding container aabfb7668896836356857197b86c9d6bbe1314111670b6cd45481ae31054d8cf: Status 404 returned error can't find the container with id aabfb7668896836356857197b86c9d6bbe1314111670b6cd45481ae31054d8cf
Jan 27 20:30:09 crc kubenswrapper[4858]: I0127 20:30:09.075477 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0"
Jan 27 20:30:09 crc kubenswrapper[4858]: I0127 20:30:09.423243 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-789b49c6fc-xkx87" podUID="8b883308-9933-4034-91e2-5562130c6f10" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.157:5353: i/o timeout"
Jan 27 20:30:09 crc kubenswrapper[4858]: I0127 20:30:09.572006 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"49af05ef-dc73-4178-a4f8-ce9191c8fa3d","Type":"ContainerStarted","Data":"25c01b6048a97dc75a27e9308572d21327dd75d88625df77f00c8bc2ca8476c3"}
Jan 27 20:30:09 crc kubenswrapper[4858]: I0127 20:30:09.572076 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Jan 27 20:30:09 crc kubenswrapper[4858]: I0127 20:30:09.572092 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"49af05ef-dc73-4178-a4f8-ce9191c8fa3d","Type":"ContainerStarted","Data":"85acbc90a6da337cb41759e11c2741ef48725bcda99dc6214abf2498ad2a8c08"}
Jan 27 20:30:09 crc kubenswrapper[4858]: I0127 20:30:09.572105 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"49af05ef-dc73-4178-a4f8-ce9191c8fa3d","Type":"ContainerStarted","Data":"aabfb7668896836356857197b86c9d6bbe1314111670b6cd45481ae31054d8cf"}
Jan 27 20:30:09 crc kubenswrapper[4858]: I0127 20:30:09.575522 4858 generic.go:334] "Generic (PLEG): container finished" podID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" containerID="432463a1935addf687650c2c490967ca0058fce8c9ee6475fba55ec2817ebc74" exitCode=1
Jan 27 20:30:09 crc kubenswrapper[4858]: I0127 20:30:09.575602 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"99d9c559-c61f-4bc2-907b-af9f9be0ce1b","Type":"ContainerDied","Data":"432463a1935addf687650c2c490967ca0058fce8c9ee6475fba55ec2817ebc74"}
Jan 27 20:30:09 crc kubenswrapper[4858]: I0127 20:30:09.576855 4858 scope.go:117] "RemoveContainer" containerID="432463a1935addf687650c2c490967ca0058fce8c9ee6475fba55ec2817ebc74"
Jan 27 20:30:09 crc kubenswrapper[4858]: I0127 20:30:09.600982 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.600953527 podStartE2EDuration="2.600953527s" podCreationTimestamp="2026-01-27 20:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:30:09.592399828 +0000 UTC m=+1354.300215554" watchObservedRunningTime="2026-01-27 20:30:09.600953527 +0000 UTC m=+1354.308769243"
Jan 27 20:30:10 crc kubenswrapper[4858]: I0127 20:30:10.588168 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"99d9c559-c61f-4bc2-907b-af9f9be0ce1b","Type":"ContainerStarted","Data":"4fe2e17cab0c4bc7715a3d67286a23a4375609195c4d9d669b3262a2b09ce1d8"}
Jan 27 20:30:11 crc kubenswrapper[4858]: I0127 20:30:11.627821 4858 generic.go:334] "Generic (PLEG): container finished" podID="8222b78c-e8de-4992-8c5b-bcf030d629ff" containerID="a48d4202a2867e87a32c9e97495a3047369823ace0126b45c61d27e9af6d4c1e" exitCode=0
Jan 27 20:30:11 crc kubenswrapper[4858]: I0127 20:30:11.627970 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nsgb9" event={"ID":"8222b78c-e8de-4992-8c5b-bcf030d629ff","Type":"ContainerDied","Data":"a48d4202a2867e87a32c9e97495a3047369823ace0126b45c61d27e9af6d4c1e"}
Jan 27 20:30:11 crc kubenswrapper[4858]: I0127 20:30:11.633618 4858 generic.go:334] "Generic (PLEG): container finished" podID="047f39f4-e397-46e4-a998-4bf8060a1114" containerID="3405fccebf6de7872af9821078dd1c457d05027ef7264c9c69ead0bb38bec513" exitCode=0
Jan 27 20:30:11 crc kubenswrapper[4858]: I0127 20:30:11.633801 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6n2n5" event={"ID":"047f39f4-e397-46e4-a998-4bf8060a1114","Type":"ContainerDied","Data":"3405fccebf6de7872af9821078dd1c457d05027ef7264c9c69ead0bb38bec513"}
Jan 27 20:30:11 crc kubenswrapper[4858]: I0127 20:30:11.636721 4858 generic.go:334] "Generic (PLEG): container finished" podID="b7c7b1cd-a2a1-4bd2-a57c-715448327967" containerID="cb1fcfbc38322e9f89cec41c1db7af41b384db137a28f509ce0209026038b3d1" exitCode=0
Jan 27 20:30:11 crc kubenswrapper[4858]: I0127 20:30:11.636786 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gc4mg" event={"ID":"b7c7b1cd-a2a1-4bd2-a57c-715448327967","Type":"ContainerDied","Data":"cb1fcfbc38322e9f89cec41c1db7af41b384db137a28f509ce0209026038b3d1"}
Jan 27 20:30:11 crc kubenswrapper[4858]: I0127 20:30:11.886347 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Jan 27 20:30:12 crc kubenswrapper[4858]: I0127 20:30:12.991663 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.678991 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gc4mg" event={"ID":"b7c7b1cd-a2a1-4bd2-a57c-715448327967","Type":"ContainerDied","Data":"a6d3b7f7f1cd6ae1e0defc7d2ecd0c6c80234fa3527cbff9d93a2fa32b318c2a"}
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.679715 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6d3b7f7f1cd6ae1e0defc7d2ecd0c6c80234fa3527cbff9d93a2fa32b318c2a"
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.681949 4858 generic.go:334] "Generic (PLEG): container finished" podID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" containerID="4fe2e17cab0c4bc7715a3d67286a23a4375609195c4d9d669b3262a2b09ce1d8" exitCode=1
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.682021 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"99d9c559-c61f-4bc2-907b-af9f9be0ce1b","Type":"ContainerDied","Data":"4fe2e17cab0c4bc7715a3d67286a23a4375609195c4d9d669b3262a2b09ce1d8"}
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.682090 4858 scope.go:117] "RemoveContainer" containerID="432463a1935addf687650c2c490967ca0058fce8c9ee6475fba55ec2817ebc74"
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.683021 4858 scope.go:117] "RemoveContainer" containerID="4fe2e17cab0c4bc7715a3d67286a23a4375609195c4d9d669b3262a2b09ce1d8"
Jan 27 20:30:13 crc kubenswrapper[4858]: E0127 20:30:13.683698 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(99d9c559-c61f-4bc2-907b-af9f9be0ce1b)\"" pod="openstack/watcher-decision-engine-0" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b"
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.697766 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nsgb9" event={"ID":"8222b78c-e8de-4992-8c5b-bcf030d629ff","Type":"ContainerDied","Data":"3fcf425070ff40665b2eee5bbd86f09a57f5085f491522ac01a4e53cb43bdc5a"}
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.697864 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fcf425070ff40665b2eee5bbd86f09a57f5085f491522ac01a4e53cb43bdc5a"
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.703916 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6n2n5" event={"ID":"047f39f4-e397-46e4-a998-4bf8060a1114","Type":"ContainerDied","Data":"14bbb9115745c4bd5d294ebaf2245b5823249659afe3741770d138b6054d39a5"}
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.704006 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14bbb9115745c4bd5d294ebaf2245b5823249659afe3741770d138b6054d39a5"
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.722983 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nsgb9"
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.732768 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gc4mg"
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.775295 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6n2n5"
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.911463 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-db-sync-config-data\") pod \"8222b78c-e8de-4992-8c5b-bcf030d629ff\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") "
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.911520 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-config-data\") pod \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") "
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.911575 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwn2j\" (UniqueName: \"kubernetes.io/projected/b7c7b1cd-a2a1-4bd2-a57c-715448327967-kube-api-access-gwn2j\") pod \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") "
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.911599 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-combined-ca-bundle\") pod \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") "
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.911644 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-combined-ca-bundle\") pod \"8222b78c-e8de-4992-8c5b-bcf030d629ff\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") "
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.911687 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7c7b1cd-a2a1-4bd2-a57c-715448327967-logs\") pod \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") "
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.911727 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdtxf\" (UniqueName: \"kubernetes.io/projected/8222b78c-e8de-4992-8c5b-bcf030d629ff-kube-api-access-wdtxf\") pod \"8222b78c-e8de-4992-8c5b-bcf030d629ff\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") "
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.911767 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-scripts\") pod \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\" (UID: \"b7c7b1cd-a2a1-4bd2-a57c-715448327967\") "
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.911841 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/047f39f4-e397-46e4-a998-4bf8060a1114-db-sync-config-data\") pod \"047f39f4-e397-46e4-a998-4bf8060a1114\" (UID: \"047f39f4-e397-46e4-a998-4bf8060a1114\") "
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.911877 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/047f39f4-e397-46e4-a998-4bf8060a1114-combined-ca-bundle\") pod \"047f39f4-e397-46e4-a998-4bf8060a1114\" (UID: \"047f39f4-e397-46e4-a998-4bf8060a1114\") "
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.911946 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vr4w\" (UniqueName: \"kubernetes.io/projected/047f39f4-e397-46e4-a998-4bf8060a1114-kube-api-access-9vr4w\") pod \"047f39f4-e397-46e4-a998-4bf8060a1114\" (UID: \"047f39f4-e397-46e4-a998-4bf8060a1114\") "
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.911986 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-scripts\") pod \"8222b78c-e8de-4992-8c5b-bcf030d629ff\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") "
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.912015 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8222b78c-e8de-4992-8c5b-bcf030d629ff-etc-machine-id\") pod \"8222b78c-e8de-4992-8c5b-bcf030d629ff\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") "
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.912051 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-config-data\") pod \"8222b78c-e8de-4992-8c5b-bcf030d629ff\" (UID: \"8222b78c-e8de-4992-8c5b-bcf030d629ff\") "
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.913955 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8222b78c-e8de-4992-8c5b-bcf030d629ff-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8222b78c-e8de-4992-8c5b-bcf030d629ff" (UID: "8222b78c-e8de-4992-8c5b-bcf030d629ff"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.916992 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7c7b1cd-a2a1-4bd2-a57c-715448327967-logs" (OuterVolumeSpecName: "logs") pod "b7c7b1cd-a2a1-4bd2-a57c-715448327967" (UID: "b7c7b1cd-a2a1-4bd2-a57c-715448327967"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.922340 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8222b78c-e8de-4992-8c5b-bcf030d629ff" (UID: "8222b78c-e8de-4992-8c5b-bcf030d629ff"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.922500 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8222b78c-e8de-4992-8c5b-bcf030d629ff-kube-api-access-wdtxf" (OuterVolumeSpecName: "kube-api-access-wdtxf") pod "8222b78c-e8de-4992-8c5b-bcf030d629ff" (UID: "8222b78c-e8de-4992-8c5b-bcf030d629ff"). InnerVolumeSpecName "kube-api-access-wdtxf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.922789 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-scripts" (OuterVolumeSpecName: "scripts") pod "8222b78c-e8de-4992-8c5b-bcf030d629ff" (UID: "8222b78c-e8de-4992-8c5b-bcf030d629ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.922992 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/047f39f4-e397-46e4-a998-4bf8060a1114-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "047f39f4-e397-46e4-a998-4bf8060a1114" (UID: "047f39f4-e397-46e4-a998-4bf8060a1114"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.923528 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/047f39f4-e397-46e4-a998-4bf8060a1114-kube-api-access-9vr4w" (OuterVolumeSpecName: "kube-api-access-9vr4w") pod "047f39f4-e397-46e4-a998-4bf8060a1114" (UID: "047f39f4-e397-46e4-a998-4bf8060a1114"). InnerVolumeSpecName "kube-api-access-9vr4w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.929780 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-scripts" (OuterVolumeSpecName: "scripts") pod "b7c7b1cd-a2a1-4bd2-a57c-715448327967" (UID: "b7c7b1cd-a2a1-4bd2-a57c-715448327967"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.939890 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c7b1cd-a2a1-4bd2-a57c-715448327967-kube-api-access-gwn2j" (OuterVolumeSpecName: "kube-api-access-gwn2j") pod "b7c7b1cd-a2a1-4bd2-a57c-715448327967" (UID: "b7c7b1cd-a2a1-4bd2-a57c-715448327967"). InnerVolumeSpecName "kube-api-access-gwn2j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.952403 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-config-data" (OuterVolumeSpecName: "config-data") pod "b7c7b1cd-a2a1-4bd2-a57c-715448327967" (UID: "b7c7b1cd-a2a1-4bd2-a57c-715448327967"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.956271 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8222b78c-e8de-4992-8c5b-bcf030d629ff" (UID: "8222b78c-e8de-4992-8c5b-bcf030d629ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.958259 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/047f39f4-e397-46e4-a998-4bf8060a1114-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "047f39f4-e397-46e4-a998-4bf8060a1114" (UID: "047f39f4-e397-46e4-a998-4bf8060a1114"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.975310 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7c7b1cd-a2a1-4bd2-a57c-715448327967" (UID: "b7c7b1cd-a2a1-4bd2-a57c-715448327967"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:30:13 crc kubenswrapper[4858]: I0127 20:30:13.993951 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-config-data" (OuterVolumeSpecName: "config-data") pod "8222b78c-e8de-4992-8c5b-bcf030d629ff" (UID: "8222b78c-e8de-4992-8c5b-bcf030d629ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.015688 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/047f39f4-e397-46e4-a998-4bf8060a1114-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.015732 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/047f39f4-e397-46e4-a998-4bf8060a1114-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.015744 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vr4w\" (UniqueName: \"kubernetes.io/projected/047f39f4-e397-46e4-a998-4bf8060a1114-kube-api-access-9vr4w\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.015755 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.015768 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8222b78c-e8de-4992-8c5b-bcf030d629ff-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.015778 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.015787 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.015796 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.015808 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwn2j\" (UniqueName: \"kubernetes.io/projected/b7c7b1cd-a2a1-4bd2-a57c-715448327967-kube-api-access-gwn2j\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.015819 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.015829 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8222b78c-e8de-4992-8c5b-bcf030d629ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.015838 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7c7b1cd-a2a1-4bd2-a57c-715448327967-logs\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.015847 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdtxf\" (UniqueName: \"kubernetes.io/projected/8222b78c-e8de-4992-8c5b-bcf030d629ff-kube-api-access-wdtxf\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.015856 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c7b1cd-a2a1-4bd2-a57c-715448327967-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.084173 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0"
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.084893 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.084960 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.106625 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0"
Jan 27 20:30:14 crc kubenswrapper[4858]: E0127 20:30:14.555335 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="7b795cea-c66d-4bca-8e9c-7da6cf08adf8"
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.736519 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b795cea-c66d-4bca-8e9c-7da6cf08adf8","Type":"ContainerStarted","Data":"2cf5f3ac8f311926f4caecec5cbe3beafca1edba62709a677207d7b3207c878a"}
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.736725 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b795cea-c66d-4bca-8e9c-7da6cf08adf8" containerName="ceilometer-notification-agent" containerID="cri-o://aa84b43dd39168f5465057da4ffc0cf125da3e976c1b56bc5fb7f19c3ad83c36" gracePeriod=30
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.736854 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.737119 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b795cea-c66d-4bca-8e9c-7da6cf08adf8" containerName="sg-core" containerID="cri-o://b4e13937d9f6123c3847e871437efbd5c11818b2ed3824299b82634dd6f9b0cb" gracePeriod=30
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.737538 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b795cea-c66d-4bca-8e9c-7da6cf08adf8" containerName="proxy-httpd" containerID="cri-o://2cf5f3ac8f311926f4caecec5cbe3beafca1edba62709a677207d7b3207c878a" gracePeriod=30
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.754494 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gc4mg"
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.770198 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6n2n5"
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.770338 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nsgb9"
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.772437 4858 scope.go:117] "RemoveContainer" containerID="4fe2e17cab0c4bc7715a3d67286a23a4375609195c4d9d669b3262a2b09ce1d8"
Jan 27 20:30:14 crc kubenswrapper[4858]: E0127 20:30:14.772738 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(99d9c559-c61f-4bc2-907b-af9f9be0ce1b)\"" pod="openstack/watcher-decision-engine-0" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b"
Jan 27 20:30:14 crc kubenswrapper[4858]: I0127 20:30:14.858312 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.040601 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5f9b655566-275d7"]
Jan 27 20:30:15 crc kubenswrapper[4858]: E0127 20:30:15.041391 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c7b1cd-a2a1-4bd2-a57c-715448327967" containerName="placement-db-sync"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.041406 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c7b1cd-a2a1-4bd2-a57c-715448327967" containerName="placement-db-sync"
Jan 27 20:30:15 crc kubenswrapper[4858]: E0127 20:30:15.041450 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8222b78c-e8de-4992-8c5b-bcf030d629ff" containerName="cinder-db-sync"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.041456 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8222b78c-e8de-4992-8c5b-bcf030d629ff" containerName="cinder-db-sync"
Jan 27 20:30:15 crc kubenswrapper[4858]: E0127 20:30:15.041467 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047f39f4-e397-46e4-a998-4bf8060a1114" containerName="barbican-db-sync"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.041473 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="047f39f4-e397-46e4-a998-4bf8060a1114" containerName="barbican-db-sync"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.041650 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="047f39f4-e397-46e4-a998-4bf8060a1114" containerName="barbican-db-sync"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.041662 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8222b78c-e8de-4992-8c5b-bcf030d629ff" containerName="cinder-db-sync"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.041675 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c7b1cd-a2a1-4bd2-a57c-715448327967" containerName="placement-db-sync"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.043129 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.062905 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-crgtq"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.063667 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.063786 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.063945 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.092108 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.111129 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f9b655566-275d7"]
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.145420 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598238a6-e427-47db-b460-298627190cce-combined-ca-bundle\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.145537 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/598238a6-e427-47db-b460-298627190cce-public-tls-certs\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.145593 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/598238a6-e427-47db-b460-298627190cce-internal-tls-certs\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.145616 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/598238a6-e427-47db-b460-298627190cce-logs\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.145652 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/598238a6-e427-47db-b460-298627190cce-scripts\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.145685 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598238a6-e427-47db-b460-298627190cce-config-data\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.145718 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9hqm\" (UniqueName: \"kubernetes.io/projected/598238a6-e427-47db-b460-298627190cce-kube-api-access-z9hqm\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.248913 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/598238a6-e427-47db-b460-298627190cce-public-tls-certs\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.248982 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/598238a6-e427-47db-b460-298627190cce-internal-tls-certs\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.249013 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/598238a6-e427-47db-b460-298627190cce-logs\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.249068 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/598238a6-e427-47db-b460-298627190cce-scripts\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.249117 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598238a6-e427-47db-b460-298627190cce-config-data\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.249155 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9hqm\" (UniqueName: \"kubernetes.io/projected/598238a6-e427-47db-b460-298627190cce-kube-api-access-z9hqm\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.249209 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598238a6-e427-47db-b460-298627190cce-combined-ca-bundle\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.257590 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/598238a6-e427-47db-b460-298627190cce-logs\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.271604 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"]
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.272784 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/598238a6-e427-47db-b460-298627190cce-internal-tls-certs\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.273375 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.273443 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/598238a6-e427-47db-b460-298627190cce-scripts\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.274574 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/598238a6-e427-47db-b460-298627190cce-public-tls-certs\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.274991 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598238a6-e427-47db-b460-298627190cce-config-data\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.279236 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598238a6-e427-47db-b460-298627190cce-combined-ca-bundle\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.338249 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.340235 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.350001 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9thbj"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.350195 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.350332 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.350514 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.351114 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.351185 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-dns-svc\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.351212 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-config\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.351235 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc5sx\" (UniqueName: \"kubernetes.io/projected/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-kube-api-access-jc5sx\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.351271 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.351309 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.369213 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9hqm\" (UniqueName: \"kubernetes.io/projected/598238a6-e427-47db-b460-298627190cce-kube-api-access-z9hqm\") pod \"placement-5f9b655566-275d7\" (UID: \"598238a6-e427-47db-b460-298627190cce\") " pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.387448 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-58f4598744-qn5jn"]
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.389429 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-58f4598744-qn5jn"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.392129 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fxzv9"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.392443 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.392725 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.411826 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5598668497-6nzrb"]
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.412285 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f9b655566-275d7"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.413641 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5598668497-6nzrb"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.415408 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.442137 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5598668497-6nzrb"]
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.455082 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-scripts\") pod \"cinder-scheduler-0\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.455142 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.455162 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-config-data\") pod \"cinder-scheduler-0\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.455199 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5d8s\" (UniqueName: \"kubernetes.io/projected/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-kube-api-access-l5d8s\") pod \"cinder-scheduler-0\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.455222 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.455244 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.455327 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.455357 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.455407 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.455432 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-dns-svc\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.455465 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-config\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.455481 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc5sx\" (UniqueName: \"kubernetes.io/projected/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-kube-api-access-jc5sx\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.458950 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.460850 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.461126 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-dns-svc\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.463059 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.463782 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-config\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.489453 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.504189 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc5sx\" (UniqueName: \"kubernetes.io/projected/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-kube-api-access-jc5sx\") pod \"dnsmasq-dns-6b9bcc9b8c-rgnqw\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.507815 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"]
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.514114 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5f7fd77bcb-cxmbt"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.528343 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-58f4598744-qn5jn"]
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.563991 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bb04320-907b-4d35-9c41-ea828a779f5d-config-data-custom\") pod \"barbican-worker-5598668497-6nzrb\" (UID: \"9bb04320-907b-4d35-9c41-ea828a779f5d\") " pod="openstack/barbican-worker-5598668497-6nzrb"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.564285 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-scripts\") pod \"cinder-scheduler-0\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.564385 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgb9r\" (UniqueName: \"kubernetes.io/projected/9927d309-f818-4163-9659-f7b6a060960e-kube-api-access-qgb9r\") pod \"barbican-keystone-listener-58f4598744-qn5jn\" (UID: \"9927d309-f818-4163-9659-f7b6a060960e\") " pod="openstack/barbican-keystone-listener-58f4598744-qn5jn"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.564483 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-config-data\") pod \"cinder-scheduler-0\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.564573 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9927d309-f818-4163-9659-f7b6a060960e-combined-ca-bundle\") pod \"barbican-keystone-listener-58f4598744-qn5jn\" (UID: \"9927d309-f818-4163-9659-f7b6a060960e\") " pod="openstack/barbican-keystone-listener-58f4598744-qn5jn"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.564670 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9927d309-f818-4163-9659-f7b6a060960e-logs\") pod \"barbican-keystone-listener-58f4598744-qn5jn\" (UID: \"9927d309-f818-4163-9659-f7b6a060960e\") " pod="openstack/barbican-keystone-listener-58f4598744-qn5jn"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.564742 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5d8s\" (UniqueName: \"kubernetes.io/projected/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-kube-api-access-l5d8s\") pod \"cinder-scheduler-0\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.564832 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.564947 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb04320-907b-4d35-9c41-ea828a779f5d-config-data\") pod \"barbican-worker-5598668497-6nzrb\" (UID: \"9bb04320-907b-4d35-9c41-ea828a779f5d\") " pod="openstack/barbican-worker-5598668497-6nzrb"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.565040 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcgvm\" (UniqueName: \"kubernetes.io/projected/9bb04320-907b-4d35-9c41-ea828a779f5d-kube-api-access-vcgvm\") pod \"barbican-worker-5598668497-6nzrb\" (UID: \"9bb04320-907b-4d35-9c41-ea828a779f5d\") " pod="openstack/barbican-worker-5598668497-6nzrb"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.565131 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9927d309-f818-4163-9659-f7b6a060960e-config-data\") pod \"barbican-keystone-listener-58f4598744-qn5jn\" (UID: \"9927d309-f818-4163-9659-f7b6a060960e\") " pod="openstack/barbican-keystone-listener-58f4598744-qn5jn"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.565241 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9927d309-f818-4163-9659-f7b6a060960e-config-data-custom\") pod \"barbican-keystone-listener-58f4598744-qn5jn\" (UID: \"9927d309-f818-4163-9659-f7b6a060960e\") " pod="openstack/barbican-keystone-listener-58f4598744-qn5jn"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.565324 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bb04320-907b-4d35-9c41-ea828a779f5d-logs\") pod \"barbican-worker-5598668497-6nzrb\" (UID: \"9bb04320-907b-4d35-9c41-ea828a779f5d\") " pod="openstack/barbican-worker-5598668497-6nzrb"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.565424 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.565542 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb04320-907b-4d35-9c41-ea828a779f5d-combined-ca-bundle\") pod \"barbican-worker-5598668497-6nzrb\" (UID: \"9bb04320-907b-4d35-9c41-ea828a779f5d\") " pod="openstack/barbican-worker-5598668497-6nzrb"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.565680 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.565802 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.580977 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.582339 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-config-data\") pod \"cinder-scheduler-0\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.582658 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-scripts\") pod \"cinder-scheduler-0\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0"
Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.587349 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-config-data-custom\") pod \"cinder-scheduler-0\" (UID:
\"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.626314 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5d8s\" (UniqueName: \"kubernetes.io/projected/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-kube-api-access-l5d8s\") pod \"cinder-scheduler-0\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.667965 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgb9r\" (UniqueName: \"kubernetes.io/projected/9927d309-f818-4163-9659-f7b6a060960e-kube-api-access-qgb9r\") pod \"barbican-keystone-listener-58f4598744-qn5jn\" (UID: \"9927d309-f818-4163-9659-f7b6a060960e\") " pod="openstack/barbican-keystone-listener-58f4598744-qn5jn" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.668080 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9927d309-f818-4163-9659-f7b6a060960e-combined-ca-bundle\") pod \"barbican-keystone-listener-58f4598744-qn5jn\" (UID: \"9927d309-f818-4163-9659-f7b6a060960e\") " pod="openstack/barbican-keystone-listener-58f4598744-qn5jn" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.668137 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9927d309-f818-4163-9659-f7b6a060960e-logs\") pod \"barbican-keystone-listener-58f4598744-qn5jn\" (UID: \"9927d309-f818-4163-9659-f7b6a060960e\") " pod="openstack/barbican-keystone-listener-58f4598744-qn5jn" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.668216 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb04320-907b-4d35-9c41-ea828a779f5d-config-data\") pod \"barbican-worker-5598668497-6nzrb\" (UID: \"9bb04320-907b-4d35-9c41-ea828a779f5d\") " pod="openstack/barbican-worker-5598668497-6nzrb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.668287 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcgvm\" (UniqueName: \"kubernetes.io/projected/9bb04320-907b-4d35-9c41-ea828a779f5d-kube-api-access-vcgvm\") pod \"barbican-worker-5598668497-6nzrb\" (UID: \"9bb04320-907b-4d35-9c41-ea828a779f5d\") " pod="openstack/barbican-worker-5598668497-6nzrb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.668320 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9927d309-f818-4163-9659-f7b6a060960e-config-data\") pod \"barbican-keystone-listener-58f4598744-qn5jn\" (UID: \"9927d309-f818-4163-9659-f7b6a060960e\") " pod="openstack/barbican-keystone-listener-58f4598744-qn5jn" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.668589 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9927d309-f818-4163-9659-f7b6a060960e-config-data-custom\") pod \"barbican-keystone-listener-58f4598744-qn5jn\" (UID: \"9927d309-f818-4163-9659-f7b6a060960e\") " pod="openstack/barbican-keystone-listener-58f4598744-qn5jn" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.668615 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9bb04320-907b-4d35-9c41-ea828a779f5d-logs\") pod \"barbican-worker-5598668497-6nzrb\" (UID: \"9bb04320-907b-4d35-9c41-ea828a779f5d\") " pod="openstack/barbican-worker-5598668497-6nzrb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.668706 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb04320-907b-4d35-9c41-ea828a779f5d-combined-ca-bundle\") pod \"barbican-worker-5598668497-6nzrb\" (UID: \"9bb04320-907b-4d35-9c41-ea828a779f5d\") " pod="openstack/barbican-worker-5598668497-6nzrb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.668834 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bb04320-907b-4d35-9c41-ea828a779f5d-config-data-custom\") pod \"barbican-worker-5598668497-6nzrb\" (UID: \"9bb04320-907b-4d35-9c41-ea828a779f5d\") " pod="openstack/barbican-worker-5598668497-6nzrb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.672210 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9927d309-f818-4163-9659-f7b6a060960e-logs\") pod \"barbican-keystone-listener-58f4598744-qn5jn\" (UID: \"9927d309-f818-4163-9659-f7b6a060960e\") " pod="openstack/barbican-keystone-listener-58f4598744-qn5jn" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.672522 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bb04320-907b-4d35-9c41-ea828a779f5d-logs\") pod \"barbican-worker-5598668497-6nzrb\" (UID: \"9bb04320-907b-4d35-9c41-ea828a779f5d\") " pod="openstack/barbican-worker-5598668497-6nzrb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.680665 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9927d309-f818-4163-9659-f7b6a060960e-config-data\") pod \"barbican-keystone-listener-58f4598744-qn5jn\" (UID: \"9927d309-f818-4163-9659-f7b6a060960e\") " pod="openstack/barbican-keystone-listener-58f4598744-qn5jn" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.683242 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bb04320-907b-4d35-9c41-ea828a779f5d-combined-ca-bundle\") pod \"barbican-worker-5598668497-6nzrb\" (UID: \"9bb04320-907b-4d35-9c41-ea828a779f5d\") " pod="openstack/barbican-worker-5598668497-6nzrb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.686263 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9927d309-f818-4163-9659-f7b6a060960e-combined-ca-bundle\") pod \"barbican-keystone-listener-58f4598744-qn5jn\" (UID: \"9927d309-f818-4163-9659-f7b6a060960e\") " pod="openstack/barbican-keystone-listener-58f4598744-qn5jn" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.689219 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bb04320-907b-4d35-9c41-ea828a779f5d-config-data\") pod \"barbican-worker-5598668497-6nzrb\" (UID: \"9bb04320-907b-4d35-9c41-ea828a779f5d\") " pod="openstack/barbican-worker-5598668497-6nzrb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.692424 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcgvm\" (UniqueName: 
\"kubernetes.io/projected/9bb04320-907b-4d35-9c41-ea828a779f5d-kube-api-access-vcgvm\") pod \"barbican-worker-5598668497-6nzrb\" (UID: \"9bb04320-907b-4d35-9c41-ea828a779f5d\") " pod="openstack/barbican-worker-5598668497-6nzrb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.692507 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.694346 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.703662 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgb9r\" (UniqueName: \"kubernetes.io/projected/9927d309-f818-4163-9659-f7b6a060960e-kube-api-access-qgb9r\") pod \"barbican-keystone-listener-58f4598744-qn5jn\" (UID: \"9927d309-f818-4163-9659-f7b6a060960e\") " pod="openstack/barbican-keystone-listener-58f4598744-qn5jn" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.704097 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9927d309-f818-4163-9659-f7b6a060960e-config-data-custom\") pod \"barbican-keystone-listener-58f4598744-qn5jn\" (UID: \"9927d309-f818-4163-9659-f7b6a060960e\") " pod="openstack/barbican-keystone-listener-58f4598744-qn5jn" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.707936 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.711793 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bb04320-907b-4d35-9c41-ea828a779f5d-config-data-custom\") pod \"barbican-worker-5598668497-6nzrb\" (UID: \"9bb04320-907b-4d35-9c41-ea828a779f5d\") " pod="openstack/barbican-worker-5598668497-6nzrb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.732536 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"] Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.733690 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.772482 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.772610 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdp7x\" (UniqueName: \"kubernetes.io/projected/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-kube-api-access-rdp7x\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.772651 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-logs\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.772736 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.772769 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-config-data-custom\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.772766 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-757c454848-p8szs"] Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.772798 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-config-data\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.772822 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-scripts\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.774460 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.778576 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.784861 4858 generic.go:334] "Generic (PLEG): container finished" podID="7b795cea-c66d-4bca-8e9c-7da6cf08adf8" containerID="2cf5f3ac8f311926f4caecec5cbe3beafca1edba62709a677207d7b3207c878a" exitCode=0 Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.784901 4858 generic.go:334] "Generic (PLEG): container finished" podID="7b795cea-c66d-4bca-8e9c-7da6cf08adf8" containerID="b4e13937d9f6123c3847e871437efbd5c11818b2ed3824299b82634dd6f9b0cb" exitCode=2 Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.785477 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b795cea-c66d-4bca-8e9c-7da6cf08adf8","Type":"ContainerDied","Data":"2cf5f3ac8f311926f4caecec5cbe3beafca1edba62709a677207d7b3207c878a"} Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.785592 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b795cea-c66d-4bca-8e9c-7da6cf08adf8","Type":"ContainerDied","Data":"b4e13937d9f6123c3847e871437efbd5c11818b2ed3824299b82634dd6f9b0cb"} Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.795647 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.830403 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.851568 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-757c454848-p8szs"] Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.867852 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9655b799f-5tbtb"] Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.874208 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.874407 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.874480 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-config-data-custom\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.874528 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-config-data\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.874616 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-scripts\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.874664 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.874723 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdp7x\" (UniqueName: \"kubernetes.io/projected/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-kube-api-access-rdp7x\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.874769 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-logs\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.874991 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.880930 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-58f4598744-qn5jn" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.887392 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-config-data\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.896396 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-logs\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.896465 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9655b799f-5tbtb"] Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.902064 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5598668497-6nzrb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.911687 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-config-data-custom\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.917168 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-scripts\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.922384 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdp7x\" (UniqueName: \"kubernetes.io/projected/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-kube-api-access-rdp7x\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.925257 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " pod="openstack/cinder-api-0" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.977483 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-ovsdbserver-nb\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.977574 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-config-data-custom\") pod \"barbican-api-757c454848-p8szs\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.977611 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-config-data\") pod \"barbican-api-757c454848-p8szs\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.977708 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-dns-swift-storage-0\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.977995 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-ovsdbserver-sb\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.978047 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-dns-svc\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.978113 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9qnh\" (UniqueName: \"kubernetes.io/projected/cee80805-8e2b-44cd-8a95-8d4cf21effcd-kube-api-access-j9qnh\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.978145 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngg4j\" (UniqueName: \"kubernetes.io/projected/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-kube-api-access-ngg4j\") pod \"barbican-api-757c454848-p8szs\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.978185 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-config\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.978238 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-combined-ca-bundle\") pod \"barbican-api-757c454848-p8szs\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:15 crc kubenswrapper[4858]: I0127 20:30:15.978386 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-logs\") pod \"barbican-api-757c454848-p8szs\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.080646 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-ovsdbserver-nb\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.080687 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-config-data-custom\") pod \"barbican-api-757c454848-p8szs\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.080731 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-config-data\") pod \"barbican-api-757c454848-p8szs\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.080758 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-dns-swift-storage-0\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.080832 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-ovsdbserver-sb\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.080852 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-dns-svc\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.080879 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9qnh\" (UniqueName: \"kubernetes.io/projected/cee80805-8e2b-44cd-8a95-8d4cf21effcd-kube-api-access-j9qnh\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.080900 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngg4j\" (UniqueName: \"kubernetes.io/projected/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-kube-api-access-ngg4j\") pod \"barbican-api-757c454848-p8szs\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.080920 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-config\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.080946 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-combined-ca-bundle\") pod \"barbican-api-757c454848-p8szs\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.081011 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-logs\") pod \"barbican-api-757c454848-p8szs\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.082358 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-ovsdbserver-nb\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.086862 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-dns-swift-storage-0\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.089711 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-config\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.090521 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-logs\") pod \"barbican-api-757c454848-p8szs\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.091623 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-ovsdbserver-sb\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.092171 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-dns-svc\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.101985 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-config-data\") pod \"barbican-api-757c454848-p8szs\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.107346 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-config-data-custom\") pod 
\"barbican-api-757c454848-p8szs\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.110461 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-combined-ca-bundle\") pod \"barbican-api-757c454848-p8szs\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.111110 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9qnh\" (UniqueName: \"kubernetes.io/projected/cee80805-8e2b-44cd-8a95-8d4cf21effcd-kube-api-access-j9qnh\") pod \"dnsmasq-dns-9655b799f-5tbtb\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.112094 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngg4j\" (UniqueName: \"kubernetes.io/projected/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-kube-api-access-ngg4j\") pod \"barbican-api-757c454848-p8szs\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.223572 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.238995 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.261586 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f9b655566-275d7"] Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.264358 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.303939 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.313976 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.408189 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"] Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.662090 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-58f4598744-qn5jn"] Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.742615 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5598668497-6nzrb"] Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.833493 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58f4598744-qn5jn" event={"ID":"9927d309-f818-4163-9659-f7b6a060960e","Type":"ContainerStarted","Data":"463d8ea074007bddff692181907de061c63e35c7100bdbfa2582364669142879"} Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.855142 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw" event={"ID":"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b","Type":"ContainerStarted","Data":"441578b30fd1bbe1b0208582d2d6c6043179bcef22b3b61c82962c207a7cb37c"} Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.883000 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c","Type":"ContainerStarted","Data":"04f625e3e4fc97d10654024257243cdd11ae5ea4aa3f0690d1504c67d3448d0f"} Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.909429 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f9b655566-275d7" event={"ID":"598238a6-e427-47db-b460-298627190cce","Type":"ContainerStarted","Data":"c2879f7418d38080635e145cd5aa6be2c3840717040392dba9747f2a955c22ca"} Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.922040 4858 generic.go:334] "Generic (PLEG): container finished" podID="74707222-b7c2-4226-8df2-2459cb7d447c" containerID="a8d466b20daa4313c2971fa338639a04797f99e64c8b339dae34521368fce161" exitCode=137 Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.922077 4858 generic.go:334] "Generic (PLEG): container finished" podID="74707222-b7c2-4226-8df2-2459cb7d447c" containerID="7fd1c7a01ca4fad5ce789f8d407a634918a206fe96e96938159bc3e46f13b444" exitCode=137 Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.922099 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6544888b69-dvcr4" event={"ID":"74707222-b7c2-4226-8df2-2459cb7d447c","Type":"ContainerDied","Data":"a8d466b20daa4313c2971fa338639a04797f99e64c8b339dae34521368fce161"} Jan 27 20:30:16 crc kubenswrapper[4858]: I0127 20:30:16.922131 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6544888b69-dvcr4" event={"ID":"74707222-b7c2-4226-8df2-2459cb7d447c","Type":"ContainerDied","Data":"7fd1c7a01ca4fad5ce789f8d407a634918a206fe96e96938159bc3e46f13b444"} Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.045073 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.081935 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-757c454848-p8szs"] Jan 27 20:30:17 crc kubenswrapper[4858]: W0127 20:30:17.086510 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod953b3bbd_e4eb_48b1_afd8_6ac2b9050c06.slice/crio-0cc5d8fe526044eb1f0c3315696572da373d15304b2de7ffe46f094435713c65 WatchSource:0}: Error finding container 0cc5d8fe526044eb1f0c3315696572da373d15304b2de7ffe46f094435713c65: Status 404 returned error can't find the container with id 0cc5d8fe526044eb1f0c3315696572da373d15304b2de7ffe46f094435713c65 Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.158309 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njj25\" (UniqueName: \"kubernetes.io/projected/74707222-b7c2-4226-8df2-2459cb7d447c-kube-api-access-njj25\") pod \"74707222-b7c2-4226-8df2-2459cb7d447c\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.165034 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74707222-b7c2-4226-8df2-2459cb7d447c-horizon-secret-key\") pod \"74707222-b7c2-4226-8df2-2459cb7d447c\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.165100 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74707222-b7c2-4226-8df2-2459cb7d447c-logs\") pod \"74707222-b7c2-4226-8df2-2459cb7d447c\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.165167 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74707222-b7c2-4226-8df2-2459cb7d447c-scripts\") pod \"74707222-b7c2-4226-8df2-2459cb7d447c\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.165298 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74707222-b7c2-4226-8df2-2459cb7d447c-config-data\") pod \"74707222-b7c2-4226-8df2-2459cb7d447c\" (UID: \"74707222-b7c2-4226-8df2-2459cb7d447c\") " Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.166754 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74707222-b7c2-4226-8df2-2459cb7d447c-logs" (OuterVolumeSpecName: "logs") pod "74707222-b7c2-4226-8df2-2459cb7d447c" (UID: "74707222-b7c2-4226-8df2-2459cb7d447c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.173829 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74707222-b7c2-4226-8df2-2459cb7d447c-kube-api-access-njj25" (OuterVolumeSpecName: "kube-api-access-njj25") pod "74707222-b7c2-4226-8df2-2459cb7d447c" (UID: "74707222-b7c2-4226-8df2-2459cb7d447c"). InnerVolumeSpecName "kube-api-access-njj25". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.210180 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74707222-b7c2-4226-8df2-2459cb7d447c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "74707222-b7c2-4226-8df2-2459cb7d447c" (UID: "74707222-b7c2-4226-8df2-2459cb7d447c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.219091 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74707222-b7c2-4226-8df2-2459cb7d447c-config-data" (OuterVolumeSpecName: "config-data") pod "74707222-b7c2-4226-8df2-2459cb7d447c" (UID: "74707222-b7c2-4226-8df2-2459cb7d447c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.222343 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74707222-b7c2-4226-8df2-2459cb7d447c-scripts" (OuterVolumeSpecName: "scripts") pod "74707222-b7c2-4226-8df2-2459cb7d447c" (UID: "74707222-b7c2-4226-8df2-2459cb7d447c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.253740 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9655b799f-5tbtb"] Jan 27 20:30:17 crc kubenswrapper[4858]: E0127 20:30:17.279200 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f3aa248_8818_4e60_9946_16d08aecd5ab.slice/crio-83cd37394ce9dab3cd56fcdd3d1478d30a2871b2408745e4596d1247af1202eb.scope\": RecentStats: unable to find data in memory cache]" Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.292361 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njj25\" (UniqueName: \"kubernetes.io/projected/74707222-b7c2-4226-8df2-2459cb7d447c-kube-api-access-njj25\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.292398 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74707222-b7c2-4226-8df2-2459cb7d447c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.292408 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74707222-b7c2-4226-8df2-2459cb7d447c-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.292422 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74707222-b7c2-4226-8df2-2459cb7d447c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.292432 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74707222-b7c2-4226-8df2-2459cb7d447c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.296157 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.733983 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 
27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.982158 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f9b655566-275d7" event={"ID":"598238a6-e427-47db-b460-298627190cce","Type":"ContainerStarted","Data":"849ea60819fb059ffb74a55db53d96e2fa27ea5fc8e1479b0751bbca2e0a9f85"} Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.991695 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-757c454848-p8szs" event={"ID":"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06","Type":"ContainerStarted","Data":"3e388010533ba3604015535171cbac5c4ace16b31b2704dde6d62d1f013e153d"} Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.991751 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-757c454848-p8szs" event={"ID":"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06","Type":"ContainerStarted","Data":"0cc5d8fe526044eb1f0c3315696572da373d15304b2de7ffe46f094435713c65"} Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.994456 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6544888b69-dvcr4" event={"ID":"74707222-b7c2-4226-8df2-2459cb7d447c","Type":"ContainerDied","Data":"d22ee26562f6f5914b8bf2633b3e642784f89cb367cdda4fbd884ca946448538"} Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.994564 4858 scope.go:117] "RemoveContainer" containerID="a8d466b20daa4313c2971fa338639a04797f99e64c8b339dae34521368fce161" Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.994788 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:30:17 crc kubenswrapper[4858]: I0127 20:30:17.995965 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.005279 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3c3868cd-3f54-4d3a-84df-4a7ae88cd302","Type":"ContainerStarted","Data":"6b44c1f1a8186bd2c72dc49fb61a6ddd8434b22f5ead0212e00577a89aa3a415"} Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.007958 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9655b799f-5tbtb" event={"ID":"cee80805-8e2b-44cd-8a95-8d4cf21effcd","Type":"ContainerStarted","Data":"50d1f444878d784a29c1a232a50356fe61e400e278103e65afa4da821c8e9835"} Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.007990 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9655b799f-5tbtb" event={"ID":"cee80805-8e2b-44cd-8a95-8d4cf21effcd","Type":"ContainerStarted","Data":"6416c62a7704bb999c112b791338bb988f62082ca518172cf2cc03bb31a24d31"} Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.022234 4858 generic.go:334] "Generic (PLEG): container finished" podID="374dd29d-cd44-4be0-ab2b-6c1ff4419e8b" containerID="b0776e4d6aef7cd5c170469297ac5882e57537b3fb5851cb5958755e41ce493f" exitCode=0 Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.022417 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw" event={"ID":"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b","Type":"ContainerDied","Data":"b0776e4d6aef7cd5c170469297ac5882e57537b3fb5851cb5958755e41ce493f"} Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.025187 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.026058 4858 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/barbican-worker-5598668497-6nzrb" event={"ID":"9bb04320-907b-4d35-9c41-ea828a779f5d","Type":"ContainerStarted","Data":"c0c7ec53ee88ce8c51131b92aac644fb1c474e2a0a6bb3f031816c254b442164"} Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.057541 4858 generic.go:334] "Generic (PLEG): container finished" podID="0f3aa248-8818-4e60-9946-16d08aecd5ab" containerID="5c7424910c6edc4b7b3f20ed691ec2f741c8afc03400ef2e786ac2a9126ea152" exitCode=137 Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.057592 4858 generic.go:334] "Generic (PLEG): container finished" podID="0f3aa248-8818-4e60-9946-16d08aecd5ab" containerID="83cd37394ce9dab3cd56fcdd3d1478d30a2871b2408745e4596d1247af1202eb" exitCode=137 Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.057620 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-678cc97f57-w9dmc" event={"ID":"0f3aa248-8818-4e60-9946-16d08aecd5ab","Type":"ContainerDied","Data":"5c7424910c6edc4b7b3f20ed691ec2f741c8afc03400ef2e786ac2a9126ea152"} Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.057658 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-678cc97f57-w9dmc" event={"ID":"0f3aa248-8818-4e60-9946-16d08aecd5ab","Type":"ContainerDied","Data":"83cd37394ce9dab3cd56fcdd3d1478d30a2871b2408745e4596d1247af1202eb"} Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.471712 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.541921 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f3aa248-8818-4e60-9946-16d08aecd5ab-config-data\") pod \"0f3aa248-8818-4e60-9946-16d08aecd5ab\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.543292 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.580453 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f3aa248-8818-4e60-9946-16d08aecd5ab-config-data" (OuterVolumeSpecName: "config-data") pod "0f3aa248-8818-4e60-9946-16d08aecd5ab" (UID: "0f3aa248-8818-4e60-9946-16d08aecd5ab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.645298 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f3aa248-8818-4e60-9946-16d08aecd5ab-scripts\") pod \"0f3aa248-8818-4e60-9946-16d08aecd5ab\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.645364 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f3aa248-8818-4e60-9946-16d08aecd5ab-logs\") pod \"0f3aa248-8818-4e60-9946-16d08aecd5ab\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.646600 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx8v4\" (UniqueName: \"kubernetes.io/projected/0f3aa248-8818-4e60-9946-16d08aecd5ab-kube-api-access-kx8v4\") pod \"0f3aa248-8818-4e60-9946-16d08aecd5ab\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.647310 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0f3aa248-8818-4e60-9946-16d08aecd5ab-horizon-secret-key\") pod \"0f3aa248-8818-4e60-9946-16d08aecd5ab\" (UID: \"0f3aa248-8818-4e60-9946-16d08aecd5ab\") " Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.648093 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f3aa248-8818-4e60-9946-16d08aecd5ab-logs" (OuterVolumeSpecName: "logs") pod "0f3aa248-8818-4e60-9946-16d08aecd5ab" (UID: "0f3aa248-8818-4e60-9946-16d08aecd5ab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.650329 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f3aa248-8818-4e60-9946-16d08aecd5ab-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.650356 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f3aa248-8818-4e60-9946-16d08aecd5ab-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.657884 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f3aa248-8818-4e60-9946-16d08aecd5ab-kube-api-access-kx8v4" (OuterVolumeSpecName: "kube-api-access-kx8v4") pod "0f3aa248-8818-4e60-9946-16d08aecd5ab" (UID: "0f3aa248-8818-4e60-9946-16d08aecd5ab"). InnerVolumeSpecName "kube-api-access-kx8v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.670511 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f3aa248-8818-4e60-9946-16d08aecd5ab-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0f3aa248-8818-4e60-9946-16d08aecd5ab" (UID: "0f3aa248-8818-4e60-9946-16d08aecd5ab"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.700954 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f3aa248-8818-4e60-9946-16d08aecd5ab-scripts" (OuterVolumeSpecName: "scripts") pod "0f3aa248-8818-4e60-9946-16d08aecd5ab" (UID: "0f3aa248-8818-4e60-9946-16d08aecd5ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.727755 4858 scope.go:117] "RemoveContainer" containerID="7fd1c7a01ca4fad5ce789f8d407a634918a206fe96e96938159bc3e46f13b444" Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.753955 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx8v4\" (UniqueName: \"kubernetes.io/projected/0f3aa248-8818-4e60-9946-16d08aecd5ab-kube-api-access-kx8v4\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.753994 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0f3aa248-8818-4e60-9946-16d08aecd5ab-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.754007 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f3aa248-8818-4e60-9946-16d08aecd5ab-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.783707 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-57556bc8bb-j4fhs" Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.889073 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5f7fd77bcb-cxmbt"] Jan 27 20:30:18 crc kubenswrapper[4858]: I0127 20:30:18.911783 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.065093 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-ovsdbserver-nb\") pod \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.065614 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-dns-svc\") pod \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.065665 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-ovsdbserver-sb\") pod \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.065715 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc5sx\" (UniqueName: \"kubernetes.io/projected/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-kube-api-access-jc5sx\") pod \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.065770 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-dns-swift-storage-0\") pod \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.065865 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-config\") pod \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\" (UID: \"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b\") " Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.084697 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-kube-api-access-jc5sx" (OuterVolumeSpecName: "kube-api-access-jc5sx") pod "374dd29d-cd44-4be0-ab2b-6c1ff4419e8b" (UID: "374dd29d-cd44-4be0-ab2b-6c1ff4419e8b"). InnerVolumeSpecName "kube-api-access-jc5sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.094787 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw" event={"ID":"374dd29d-cd44-4be0-ab2b-6c1ff4419e8b","Type":"ContainerDied","Data":"441578b30fd1bbe1b0208582d2d6c6043179bcef22b3b61c82962c207a7cb37c"} Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.095316 4858 scope.go:117] "RemoveContainer" containerID="b0776e4d6aef7cd5c170469297ac5882e57537b3fb5851cb5958755e41ce493f" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.096794 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.104814 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-678cc97f57-w9dmc" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.105333 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-678cc97f57-w9dmc" event={"ID":"0f3aa248-8818-4e60-9946-16d08aecd5ab","Type":"ContainerDied","Data":"b2c79fbaa73acb1898d7bb08680b3e91764228da39b99f0f46f242df04951958"} Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.110071 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f9b655566-275d7" event={"ID":"598238a6-e427-47db-b460-298627190cce","Type":"ContainerStarted","Data":"deb55e75b1a9f9e9bbb0adf7d91a2dc5c451453a991ef836e4f2a387c00061e0"} Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.111487 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f9b655566-275d7" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.111520 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f9b655566-275d7" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.119437 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3c3868cd-3f54-4d3a-84df-4a7ae88cd302","Type":"ContainerStarted","Data":"29898b21dc297a9cd44e116e4c8f79a4d57dc4fd75d0b86cf10b654d0cd26599"} Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.129051 4858 generic.go:334] "Generic (PLEG): container finished" podID="cee80805-8e2b-44cd-8a95-8d4cf21effcd" containerID="50d1f444878d784a29c1a232a50356fe61e400e278103e65afa4da821c8e9835" exitCode=0 Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.129200 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9655b799f-5tbtb" event={"ID":"cee80805-8e2b-44cd-8a95-8d4cf21effcd","Type":"ContainerDied","Data":"50d1f444878d784a29c1a232a50356fe61e400e278103e65afa4da821c8e9835"} Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.143451 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5f7fd77bcb-cxmbt" podUID="2ec05cb1-c40c-48cb-ba64-9321abb6287c" containerName="horizon-log" containerID="cri-o://51a7810e6ed3102dd208860bde7beb41d43fac91b3815b7ecdc22e5d766e5ed9" gracePeriod=30 Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.143658 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5f7fd77bcb-cxmbt" podUID="2ec05cb1-c40c-48cb-ba64-9321abb6287c" containerName="horizon" containerID="cri-o://1fb5d262371a89abeed10a1670e4080ebaeb89f0f9b926b587ffc3cf13b2dccc" gracePeriod=30 Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.168175 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc5sx\" (UniqueName: \"kubernetes.io/projected/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-kube-api-access-jc5sx\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.172124 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5f9b655566-275d7" podStartSLOduration=5.172084755 podStartE2EDuration="5.172084755s" podCreationTimestamp="2026-01-27 20:30:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:30:19.167183574 +0000 UTC m=+1363.874999300" watchObservedRunningTime="2026-01-27 20:30:19.172084755 +0000 UTC m=+1363.879900461" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.183693 4858 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.231823 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-678cc97f57-w9dmc"] Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.249262 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-678cc97f57-w9dmc"] Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.393252 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "374dd29d-cd44-4be0-ab2b-6c1ff4419e8b" (UID: "374dd29d-cd44-4be0-ab2b-6c1ff4419e8b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.480122 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.673980 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "374dd29d-cd44-4be0-ab2b-6c1ff4419e8b" (UID: "374dd29d-cd44-4be0-ab2b-6c1ff4419e8b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.690802 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.754537 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "374dd29d-cd44-4be0-ab2b-6c1ff4419e8b" (UID: "374dd29d-cd44-4be0-ab2b-6c1ff4419e8b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.760428 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "374dd29d-cd44-4be0-ab2b-6c1ff4419e8b" (UID: "374dd29d-cd44-4be0-ab2b-6c1ff4419e8b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.777999 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-config" (OuterVolumeSpecName: "config") pod "374dd29d-cd44-4be0-ab2b-6c1ff4419e8b" (UID: "374dd29d-cd44-4be0-ab2b-6c1ff4419e8b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.792903 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.792943 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:19 crc kubenswrapper[4858]: I0127 20:30:19.792955 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:20 crc kubenswrapper[4858]: I0127 20:30:20.115333 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f3aa248-8818-4e60-9946-16d08aecd5ab" path="/var/lib/kubelet/pods/0f3aa248-8818-4e60-9946-16d08aecd5ab/volumes" Jan 27 20:30:20 crc kubenswrapper[4858]: I0127 20:30:20.155003 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-757c454848-p8szs" event={"ID":"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06","Type":"ContainerStarted","Data":"d33b53914cfd421a68ccc079404ef982cde7f79ec3a4b2e53c171995dc752d75"} Jan 27 20:30:20 crc kubenswrapper[4858]: I0127 20:30:20.155487 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:20 crc kubenswrapper[4858]: I0127 20:30:20.155536 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:20 crc kubenswrapper[4858]: I0127 20:30:20.165170 4858 generic.go:334] "Generic (PLEG): container finished" podID="7b795cea-c66d-4bca-8e9c-7da6cf08adf8" containerID="aa84b43dd39168f5465057da4ffc0cf125da3e976c1b56bc5fb7f19c3ad83c36" exitCode=0 Jan 27 20:30:20 crc kubenswrapper[4858]: I0127 20:30:20.165272 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b795cea-c66d-4bca-8e9c-7da6cf08adf8","Type":"ContainerDied","Data":"aa84b43dd39168f5465057da4ffc0cf125da3e976c1b56bc5fb7f19c3ad83c36"} Jan 27 20:30:20 crc kubenswrapper[4858]: I0127 20:30:20.168339 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"] Jan 27 20:30:20 crc kubenswrapper[4858]: I0127 20:30:20.178517 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9655b799f-5tbtb" event={"ID":"cee80805-8e2b-44cd-8a95-8d4cf21effcd","Type":"ContainerStarted","Data":"ffd0e60daa9864a08f51d8ad17007f61b10112610052154c9ca2e049fabf4c14"} Jan 27 20:30:20 crc kubenswrapper[4858]: I0127 20:30:20.178956 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:20 crc kubenswrapper[4858]: I0127 20:30:20.182977 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b9bcc9b8c-rgnqw"] Jan 27 20:30:20 crc kubenswrapper[4858]: I0127 20:30:20.206104 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-757c454848-p8szs" podStartSLOduration=5.206078563 podStartE2EDuration="5.206078563s" podCreationTimestamp="2026-01-27 20:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 20:30:20.194196921 +0000 UTC m=+1364.902012647" watchObservedRunningTime="2026-01-27 20:30:20.206078563 +0000 UTC m=+1364.913894269" Jan 27 20:30:20 crc kubenswrapper[4858]: I0127 20:30:20.231343 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9655b799f-5tbtb" podStartSLOduration=5.231319629 podStartE2EDuration="5.231319629s" podCreationTimestamp="2026-01-27 20:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:30:20.229441585 +0000 UTC m=+1364.937257311" watchObservedRunningTime="2026-01-27 20:30:20.231319629 +0000 UTC m=+1364.939135335" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.095071 4858 scope.go:117] "RemoveContainer" containerID="5c7424910c6edc4b7b3f20ed691ec2f741c8afc03400ef2e786ac2a9126ea152" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.198203 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ec05cb1-c40c-48cb-ba64-9321abb6287c" containerID="1fb5d262371a89abeed10a1670e4080ebaeb89f0f9b926b587ffc3cf13b2dccc" exitCode=0 Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.198286 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f7fd77bcb-cxmbt" event={"ID":"2ec05cb1-c40c-48cb-ba64-9321abb6287c","Type":"ContainerDied","Data":"1fb5d262371a89abeed10a1670e4080ebaeb89f0f9b926b587ffc3cf13b2dccc"} Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.209937 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b795cea-c66d-4bca-8e9c-7da6cf08adf8","Type":"ContainerDied","Data":"c8fceb6ab0163bb450e815aa6d43290c4a235f3fdfe58b2316e18554e03ae1ce"} Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.210007 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8fceb6ab0163bb450e815aa6d43290c4a235f3fdfe58b2316e18554e03ae1ce" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.215922 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3c3868cd-3f54-4d3a-84df-4a7ae88cd302","Type":"ContainerStarted","Data":"3de537b644238319e501d41b35a33a67e7eb03882951f6f946567ba192794112"} Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.217281 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3c3868cd-3f54-4d3a-84df-4a7ae88cd302" containerName="cinder-api-log" containerID="cri-o://29898b21dc297a9cd44e116e4c8f79a4d57dc4fd75d0b86cf10b654d0cd26599" gracePeriod=30 Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.217612 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3c3868cd-3f54-4d3a-84df-4a7ae88cd302" containerName="cinder-api" containerID="cri-o://3de537b644238319e501d41b35a33a67e7eb03882951f6f946567ba192794112" gracePeriod=30 Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.217731 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.255928 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.255899955 podStartE2EDuration="6.255899955s" podCreationTimestamp="2026-01-27 20:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 20:30:21.252618361 +0000 UTC m=+1365.960434087" watchObservedRunningTime="2026-01-27 20:30:21.255899955 +0000 UTC m=+1365.963715661" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.284085 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.448607 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-sg-core-conf-yaml\") pod \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.449249 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-config-data\") pod \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.449382 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-run-httpd\") pod \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.449414 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-scripts\") pod \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.449467 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fzcl\" (UniqueName: \"kubernetes.io/projected/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-kube-api-access-6fzcl\") pod \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.449536 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-log-httpd\") pod \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.449623 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-combined-ca-bundle\") pod \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\" (UID: \"7b795cea-c66d-4bca-8e9c-7da6cf08adf8\") " Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.450843 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7b795cea-c66d-4bca-8e9c-7da6cf08adf8" (UID: "7b795cea-c66d-4bca-8e9c-7da6cf08adf8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.451192 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7b795cea-c66d-4bca-8e9c-7da6cf08adf8" (UID: "7b795cea-c66d-4bca-8e9c-7da6cf08adf8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.451728 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.451751 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.457712 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-scripts" (OuterVolumeSpecName: "scripts") pod "7b795cea-c66d-4bca-8e9c-7da6cf08adf8" (UID: "7b795cea-c66d-4bca-8e9c-7da6cf08adf8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.458515 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-kube-api-access-6fzcl" (OuterVolumeSpecName: "kube-api-access-6fzcl") pod "7b795cea-c66d-4bca-8e9c-7da6cf08adf8" (UID: "7b795cea-c66d-4bca-8e9c-7da6cf08adf8"). InnerVolumeSpecName "kube-api-access-6fzcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.522275 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b795cea-c66d-4bca-8e9c-7da6cf08adf8" (UID: "7b795cea-c66d-4bca-8e9c-7da6cf08adf8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.533214 4858 scope.go:117] "RemoveContainer" containerID="83cd37394ce9dab3cd56fcdd3d1478d30a2871b2408745e4596d1247af1202eb" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.537322 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7b795cea-c66d-4bca-8e9c-7da6cf08adf8" (UID: "7b795cea-c66d-4bca-8e9c-7da6cf08adf8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.553857 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.554146 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.554237 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.554368 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fzcl\" (UniqueName: \"kubernetes.io/projected/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-kube-api-access-6fzcl\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.559810 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-config-data" (OuterVolumeSpecName: "config-data") pod "7b795cea-c66d-4bca-8e9c-7da6cf08adf8" (UID: "7b795cea-c66d-4bca-8e9c-7da6cf08adf8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:21 crc kubenswrapper[4858]: I0127 20:30:21.656450 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b795cea-c66d-4bca-8e9c-7da6cf08adf8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.022414 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-69dcf58cf6-v246z"] Jan 27 20:30:22 crc kubenswrapper[4858]: E0127 20:30:22.023260 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b795cea-c66d-4bca-8e9c-7da6cf08adf8" containerName="sg-core" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.023280 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b795cea-c66d-4bca-8e9c-7da6cf08adf8" containerName="sg-core" Jan 27 20:30:22 crc kubenswrapper[4858]: E0127 20:30:22.023296 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74707222-b7c2-4226-8df2-2459cb7d447c" containerName="horizon" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.023303 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="74707222-b7c2-4226-8df2-2459cb7d447c" containerName="horizon" Jan 27 20:30:22 crc kubenswrapper[4858]: E0127 20:30:22.023318 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b795cea-c66d-4bca-8e9c-7da6cf08adf8" containerName="ceilometer-notification-agent" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.023326 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b795cea-c66d-4bca-8e9c-7da6cf08adf8" containerName="ceilometer-notification-agent" Jan 27 20:30:22 crc kubenswrapper[4858]: E0127 20:30:22.023336 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f3aa248-8818-4e60-9946-16d08aecd5ab" containerName="horizon-log" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.023342 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f3aa248-8818-4e60-9946-16d08aecd5ab" containerName="horizon-log" 
Jan 27 20:30:22 crc kubenswrapper[4858]: E0127 20:30:22.023353 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="374dd29d-cd44-4be0-ab2b-6c1ff4419e8b" containerName="init" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.023359 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="374dd29d-cd44-4be0-ab2b-6c1ff4419e8b" containerName="init" Jan 27 20:30:22 crc kubenswrapper[4858]: E0127 20:30:22.023375 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f3aa248-8818-4e60-9946-16d08aecd5ab" containerName="horizon" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.023382 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f3aa248-8818-4e60-9946-16d08aecd5ab" containerName="horizon" Jan 27 20:30:22 crc kubenswrapper[4858]: E0127 20:30:22.023392 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b795cea-c66d-4bca-8e9c-7da6cf08adf8" containerName="proxy-httpd" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.023398 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b795cea-c66d-4bca-8e9c-7da6cf08adf8" containerName="proxy-httpd" Jan 27 20:30:22 crc kubenswrapper[4858]: E0127 20:30:22.023411 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74707222-b7c2-4226-8df2-2459cb7d447c" containerName="horizon-log" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.023418 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="74707222-b7c2-4226-8df2-2459cb7d447c" containerName="horizon-log" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.023699 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="374dd29d-cd44-4be0-ab2b-6c1ff4419e8b" containerName="init" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.023715 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b795cea-c66d-4bca-8e9c-7da6cf08adf8" containerName="proxy-httpd" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.023726 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="74707222-b7c2-4226-8df2-2459cb7d447c" containerName="horizon" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.023734 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b795cea-c66d-4bca-8e9c-7da6cf08adf8" containerName="ceilometer-notification-agent" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.023747 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b795cea-c66d-4bca-8e9c-7da6cf08adf8" containerName="sg-core" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.023757 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="74707222-b7c2-4226-8df2-2459cb7d447c" containerName="horizon-log" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.023793 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f3aa248-8818-4e60-9946-16d08aecd5ab" containerName="horizon-log" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.023817 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f3aa248-8818-4e60-9946-16d08aecd5ab" containerName="horizon" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.026301 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.040693 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.040983 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.071980 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69dcf58cf6-v246z"] Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.126974 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="374dd29d-cd44-4be0-ab2b-6c1ff4419e8b" path="/var/lib/kubelet/pods/374dd29d-cd44-4be0-ab2b-6c1ff4419e8b/volumes" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.184668 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9651b951-4ad0-42ae-85fb-176da5b8ccdf-public-tls-certs\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.185227 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9651b951-4ad0-42ae-85fb-176da5b8ccdf-logs\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.185286 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8t5b\" (UniqueName: \"kubernetes.io/projected/9651b951-4ad0-42ae-85fb-176da5b8ccdf-kube-api-access-q8t5b\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.185327 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9651b951-4ad0-42ae-85fb-176da5b8ccdf-combined-ca-bundle\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.185419 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9651b951-4ad0-42ae-85fb-176da5b8ccdf-config-data\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.185669 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9651b951-4ad0-42ae-85fb-176da5b8ccdf-config-data-custom\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.185709 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9651b951-4ad0-42ae-85fb-176da5b8ccdf-internal-tls-certs\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.245961 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c","Type":"ContainerStarted","Data":"d9ea081c9b14825180aaf79b780b41a7e454a9460906cd9803de39b27f66f9c1"} Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.266498 4858 generic.go:334] "Generic (PLEG): container finished" podID="3c3868cd-3f54-4d3a-84df-4a7ae88cd302" containerID="3de537b644238319e501d41b35a33a67e7eb03882951f6f946567ba192794112" exitCode=0 Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.266568 4858 generic.go:334] "Generic (PLEG): container finished" podID="3c3868cd-3f54-4d3a-84df-4a7ae88cd302" containerID="29898b21dc297a9cd44e116e4c8f79a4d57dc4fd75d0b86cf10b654d0cd26599" exitCode=143 Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.266661 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3c3868cd-3f54-4d3a-84df-4a7ae88cd302","Type":"ContainerDied","Data":"3de537b644238319e501d41b35a33a67e7eb03882951f6f946567ba192794112"} Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.266704 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3c3868cd-3f54-4d3a-84df-4a7ae88cd302","Type":"ContainerDied","Data":"29898b21dc297a9cd44e116e4c8f79a4d57dc4fd75d0b86cf10b654d0cd26599"} Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.275429 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58f4598744-qn5jn" event={"ID":"9927d309-f818-4163-9659-f7b6a060960e","Type":"ContainerStarted","Data":"be0669fbfab8017fb6baaac330df130e97c12e66e10cc459d4a7ea9b0f68af9e"} Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.287433 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9651b951-4ad0-42ae-85fb-176da5b8ccdf-public-tls-certs\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.287498 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9651b951-4ad0-42ae-85fb-176da5b8ccdf-logs\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.287582 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8t5b\" (UniqueName: \"kubernetes.io/projected/9651b951-4ad0-42ae-85fb-176da5b8ccdf-kube-api-access-q8t5b\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.287619 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9651b951-4ad0-42ae-85fb-176da5b8ccdf-combined-ca-bundle\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc 
kubenswrapper[4858]: I0127 20:30:22.287700 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9651b951-4ad0-42ae-85fb-176da5b8ccdf-config-data\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.287834 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9651b951-4ad0-42ae-85fb-176da5b8ccdf-config-data-custom\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.290702 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9651b951-4ad0-42ae-85fb-176da5b8ccdf-logs\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.296135 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.298285 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5598668497-6nzrb" event={"ID":"9bb04320-907b-4d35-9c41-ea828a779f5d","Type":"ContainerStarted","Data":"565f89aeb0029e3cea57db6866f6a08bb3708380dcb2def3ab6f2e022123b1ae"} Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.304320 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9651b951-4ad0-42ae-85fb-176da5b8ccdf-public-tls-certs\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.304428 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9651b951-4ad0-42ae-85fb-176da5b8ccdf-config-data-custom\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.306343 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9651b951-4ad0-42ae-85fb-176da5b8ccdf-config-data\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.287858 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9651b951-4ad0-42ae-85fb-176da5b8ccdf-internal-tls-certs\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.308730 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9651b951-4ad0-42ae-85fb-176da5b8ccdf-internal-tls-certs\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 
20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.309315 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9651b951-4ad0-42ae-85fb-176da5b8ccdf-combined-ca-bundle\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.311155 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8t5b\" (UniqueName: \"kubernetes.io/projected/9651b951-4ad0-42ae-85fb-176da5b8ccdf-kube-api-access-q8t5b\") pod \"barbican-api-69dcf58cf6-v246z\" (UID: \"9651b951-4ad0-42ae-85fb-176da5b8ccdf\") " pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.401466 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.437086 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.475034 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.500943 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.519875 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:30:22 crc kubenswrapper[4858]: E0127 20:30:22.527818 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3868cd-3f54-4d3a-84df-4a7ae88cd302" containerName="cinder-api" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.527844 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3868cd-3f54-4d3a-84df-4a7ae88cd302" containerName="cinder-api" Jan 27 20:30:22 crc kubenswrapper[4858]: E0127 20:30:22.527886 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3868cd-3f54-4d3a-84df-4a7ae88cd302" containerName="cinder-api-log" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.527895 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3868cd-3f54-4d3a-84df-4a7ae88cd302" containerName="cinder-api-log" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.529202 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3868cd-3f54-4d3a-84df-4a7ae88cd302" containerName="cinder-api-log" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.529246 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3868cd-3f54-4d3a-84df-4a7ae88cd302" containerName="cinder-api" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.522145 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-config-data\") pod \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.536268 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-combined-ca-bundle\") pod \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.536394 4858 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdp7x\" (UniqueName: \"kubernetes.io/projected/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-kube-api-access-rdp7x\") pod \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.536484 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-config-data-custom\") pod \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.536572 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-etc-machine-id\") pod \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.536608 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-logs\") pod \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.536643 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-scripts\") pod \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\" (UID: \"3c3868cd-3f54-4d3a-84df-4a7ae88cd302\") " Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.542093 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.542686 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3c3868cd-3f54-4d3a-84df-4a7ae88cd302" (UID: "3c3868cd-3f54-4d3a-84df-4a7ae88cd302"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.543121 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-logs" (OuterVolumeSpecName: "logs") pod "3c3868cd-3f54-4d3a-84df-4a7ae88cd302" (UID: "3c3868cd-3f54-4d3a-84df-4a7ae88cd302"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.546621 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.546807 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.576936 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-scripts" (OuterVolumeSpecName: "scripts") pod "3c3868cd-3f54-4d3a-84df-4a7ae88cd302" (UID: "3c3868cd-3f54-4d3a-84df-4a7ae88cd302"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.577108 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3c3868cd-3f54-4d3a-84df-4a7ae88cd302" (UID: "3c3868cd-3f54-4d3a-84df-4a7ae88cd302"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.579912 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-kube-api-access-rdp7x" (OuterVolumeSpecName: "kube-api-access-rdp7x") pod "3c3868cd-3f54-4d3a-84df-4a7ae88cd302" (UID: "3c3868cd-3f54-4d3a-84df-4a7ae88cd302"). InnerVolumeSpecName "kube-api-access-rdp7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.610373 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.616060 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c3868cd-3f54-4d3a-84df-4a7ae88cd302" (UID: "3c3868cd-3f54-4d3a-84df-4a7ae88cd302"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.639835 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-config-data" (OuterVolumeSpecName: "config-data") pod "3c3868cd-3f54-4d3a-84df-4a7ae88cd302" (UID: "3c3868cd-3f54-4d3a-84df-4a7ae88cd302"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.640764 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-scripts\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.640828 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/338280a3-8a99-4294-81bb-bff485d21e74-run-httpd\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.640877 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-config-data\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.640902 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.640969 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dhbb\" (UniqueName: \"kubernetes.io/projected/338280a3-8a99-4294-81bb-bff485d21e74-kube-api-access-2dhbb\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.641008 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/338280a3-8a99-4294-81bb-bff485d21e74-log-httpd\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.641051 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.641229 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.641247 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.641258 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdp7x\" (UniqueName: \"kubernetes.io/projected/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-kube-api-access-rdp7x\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.641269 4858 
reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.641279 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.641289 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.641298 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3868cd-3f54-4d3a-84df-4a7ae88cd302-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.743532 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.743608 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dhbb\" (UniqueName: \"kubernetes.io/projected/338280a3-8a99-4294-81bb-bff485d21e74-kube-api-access-2dhbb\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.743657 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/338280a3-8a99-4294-81bb-bff485d21e74-log-httpd\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.743698 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.743765 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-scripts\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.743784 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/338280a3-8a99-4294-81bb-bff485d21e74-run-httpd\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.743824 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-config-data\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.748613 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.749366 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/338280a3-8a99-4294-81bb-bff485d21e74-run-httpd\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.749606 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/338280a3-8a99-4294-81bb-bff485d21e74-log-httpd\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.753485 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-config-data\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.753731 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5f7fd77bcb-cxmbt" podUID="2ec05cb1-c40c-48cb-ba64-9321abb6287c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.754459 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.760658 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-scripts\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.770231 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dhbb\" (UniqueName: \"kubernetes.io/projected/338280a3-8a99-4294-81bb-bff485d21e74-kube-api-access-2dhbb\") pod \"ceilometer-0\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " pod="openstack/ceilometer-0" Jan 27 20:30:22 crc kubenswrapper[4858]: I0127 20:30:22.910299 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.085516 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69dcf58cf6-v246z"] Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.306673 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.309750 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.342791 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5598668497-6nzrb" event={"ID":"9bb04320-907b-4d35-9c41-ea828a779f5d","Type":"ContainerStarted","Data":"a9744cc5b1378e92c830caae988c2631bb57c696e7c83513ea38f44cebaf72e0"} Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.360803 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69dcf58cf6-v246z" event={"ID":"9651b951-4ad0-42ae-85fb-176da5b8ccdf","Type":"ContainerStarted","Data":"0618e7ad870dcb33edaaef5a08a93cd5a358bc535c370e94f73f2c6f7738b619"} Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.375046 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c","Type":"ContainerStarted","Data":"76304cab4e075da41765444ea38b02f8c4cb97733112fbac3182ad9d44d8e304"} Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.387442 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5598668497-6nzrb" podStartSLOduration=4.040996501 podStartE2EDuration="8.387392077s" podCreationTimestamp="2026-01-27 20:30:15 +0000 UTC" firstStartedPulling="2026-01-27 20:30:16.806762522 +0000 UTC m=+1361.514578228" lastFinishedPulling="2026-01-27 20:30:21.153158098 +0000 UTC m=+1365.860973804" observedRunningTime="2026-01-27 20:30:23.371894251 +0000 UTC m=+1368.079709967" watchObservedRunningTime="2026-01-27 20:30:23.387392077 +0000 UTC m=+1368.095207783" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.391366 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3c3868cd-3f54-4d3a-84df-4a7ae88cd302","Type":"ContainerDied","Data":"6b44c1f1a8186bd2c72dc49fb61a6ddd8434b22f5ead0212e00577a89aa3a415"} Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.391642 4858 scope.go:117] "RemoveContainer" containerID="3de537b644238319e501d41b35a33a67e7eb03882951f6f946567ba192794112" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.391883 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.443418 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58f4598744-qn5jn" event={"ID":"9927d309-f818-4163-9659-f7b6a060960e","Type":"ContainerStarted","Data":"c0a6b9f2c3cae9e17582722f88f6d03df595955203e3e9f0290bc6594828e36f"} Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.456713 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.861584434 podStartE2EDuration="8.456681512s" podCreationTimestamp="2026-01-27 20:30:15 +0000 UTC" firstStartedPulling="2026-01-27 20:30:16.443418552 +0000 UTC m=+1361.151234248" lastFinishedPulling="2026-01-27 20:30:19.03851562 +0000 UTC m=+1363.746331326" observedRunningTime="2026-01-27 20:30:23.445222522 +0000 UTC m=+1368.153038238" watchObservedRunningTime="2026-01-27 20:30:23.456681512 +0000 UTC m=+1368.164497218" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.485180 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-58f4598744-qn5jn" podStartSLOduration=4.099713332 podStartE2EDuration="8.485155592s" podCreationTimestamp="2026-01-27 20:30:15 +0000 UTC" firstStartedPulling="2026-01-27 20:30:16.770853228 +0000 UTC m=+1361.478668934" lastFinishedPulling="2026-01-27 20:30:21.156295488 +0000 UTC m=+1365.864111194" observedRunningTime="2026-01-27 20:30:23.482167576 +0000 UTC m=+1368.189983282" watchObservedRunningTime="2026-01-27 20:30:23.485155592 +0000 UTC m=+1368.192971298" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.487988 4858 scope.go:117] "RemoveContainer" containerID="29898b21dc297a9cd44e116e4c8f79a4d57dc4fd75d0b86cf10b654d0cd26599" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.513004 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.524830 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.538866 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.540791 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.548238 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.548615 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.554005 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.566919 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.567006 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.567049 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk26x\" (UniqueName: \"kubernetes.io/projected/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-kube-api-access-vk26x\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.567064 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.567128 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-logs\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.567147 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-config-data-custom\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.567173 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-scripts\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.567191 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " 
pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.567211 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-config-data\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.572878 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.668855 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk26x\" (UniqueName: \"kubernetes.io/projected/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-kube-api-access-vk26x\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.668910 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.669009 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-logs\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.669039 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-config-data-custom\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.669078 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.669096 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-scripts\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.669124 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-config-data\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.669163 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.669236 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.670931 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-logs\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.674078 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.687106 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-scripts\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.688256 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.694267 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-config-data-custom\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.694516 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.695073 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk26x\" (UniqueName: \"kubernetes.io/projected/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-kube-api-access-vk26x\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.704526 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-config-data\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.707274 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bdfd71-369f-46e8-be09-4e5b5bb22d1a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"29bdfd71-369f-46e8-be09-4e5b5bb22d1a\") " pod="openstack/cinder-api-0" Jan 27 20:30:23 crc kubenswrapper[4858]: I0127 20:30:23.894467 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 20:30:24 crc kubenswrapper[4858]: I0127 20:30:24.091656 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c3868cd-3f54-4d3a-84df-4a7ae88cd302" path="/var/lib/kubelet/pods/3c3868cd-3f54-4d3a-84df-4a7ae88cd302/volumes" Jan 27 20:30:24 crc kubenswrapper[4858]: I0127 20:30:24.092718 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b795cea-c66d-4bca-8e9c-7da6cf08adf8" path="/var/lib/kubelet/pods/7b795cea-c66d-4bca-8e9c-7da6cf08adf8/volumes" Jan 27 20:30:24 crc kubenswrapper[4858]: I0127 20:30:24.470406 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69dcf58cf6-v246z" event={"ID":"9651b951-4ad0-42ae-85fb-176da5b8ccdf","Type":"ContainerStarted","Data":"9be911c4a09320917ad59223c2ebc93a9724390a78596098ba82786f4c48124c"} Jan 27 20:30:24 crc kubenswrapper[4858]: I0127 20:30:24.506942 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"338280a3-8a99-4294-81bb-bff485d21e74","Type":"ContainerStarted","Data":"45eb93a9225d6d3a18f9a07a442e078603fe06282db28aacbfab9e15c29b04bb"} Jan 27 20:30:25 crc kubenswrapper[4858]: I0127 20:30:25.081801 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 20:30:25 crc kubenswrapper[4858]: I0127 20:30:25.532414 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"338280a3-8a99-4294-81bb-bff485d21e74","Type":"ContainerStarted","Data":"998018ad68c3e321ad5ee5240e7b96c47151dd626b1e2a865bab3f619aba0d21"} Jan 27 20:30:25 crc kubenswrapper[4858]: I0127 20:30:25.534482 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29bdfd71-369f-46e8-be09-4e5b5bb22d1a","Type":"ContainerStarted","Data":"8a91ebf6e09e8ebe3285972b4e12ac46412c72893b2d6420bfd9e48038a820d2"} Jan 27 20:30:25 crc kubenswrapper[4858]: I0127 20:30:25.547059 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69dcf58cf6-v246z" event={"ID":"9651b951-4ad0-42ae-85fb-176da5b8ccdf","Type":"ContainerStarted","Data":"926f25c572174565cb016809df02a77f468cc5d6020960b7339187904bcdffb5"} Jan 27 20:30:25 crc kubenswrapper[4858]: I0127 20:30:25.547317 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:25 crc kubenswrapper[4858]: I0127 20:30:25.547365 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:25 crc kubenswrapper[4858]: I0127 20:30:25.602668 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-69dcf58cf6-v246z" podStartSLOduration=4.6026474109999995 podStartE2EDuration="4.602647411s" podCreationTimestamp="2026-01-27 20:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:30:25.59357 +0000 UTC m=+1370.301385706" watchObservedRunningTime="2026-01-27 20:30:25.602647411 +0000 UTC m=+1370.310463117" Jan 27 20:30:25 crc kubenswrapper[4858]: I0127 20:30:25.830674 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 20:30:26 crc kubenswrapper[4858]: I0127 20:30:26.178635 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c77755cc5-ffvng" Jan 27 20:30:26 crc kubenswrapper[4858]: 
I0127 20:30:26.280897 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:30:26 crc kubenswrapper[4858]: I0127 20:30:26.299234 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f9dfd55dd-q9n8v"] Jan 27 20:30:26 crc kubenswrapper[4858]: I0127 20:30:26.299493 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-f9dfd55dd-q9n8v" podUID="3d6da88b-9fe4-4e4f-afdf-f8dccf939679" containerName="neutron-api" containerID="cri-o://e288a89f8b1f784e2218c3343b66f105090d3e016c5e1e75c29e27c37a16cc08" gracePeriod=30 Jan 27 20:30:26 crc kubenswrapper[4858]: I0127 20:30:26.299712 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-f9dfd55dd-q9n8v" podUID="3d6da88b-9fe4-4e4f-afdf-f8dccf939679" containerName="neutron-httpd" containerID="cri-o://739c12354c4bba70f23579f5a79f3f3f786dbccb82067f18b91ac4392555ae6f" gracePeriod=30 Jan 27 20:30:26 crc kubenswrapper[4858]: I0127 20:30:26.415152 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d8667ddb9-dmdvl"] Jan 27 20:30:26 crc kubenswrapper[4858]: I0127 20:30:26.415439 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" podUID="d76a7fd9-c6ae-468d-814d-4340d0312bcb" containerName="dnsmasq-dns" containerID="cri-o://082aa311904c1f0750e52f3098be55a0f8a014c2c5ce9bb0384e3bfcdc163eef" gracePeriod=10 Jan 27 20:30:26 crc kubenswrapper[4858]: I0127 20:30:26.593984 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29bdfd71-369f-46e8-be09-4e5b5bb22d1a","Type":"ContainerStarted","Data":"d19d22bf6cdc855c18c811fdc104328e50a9a2fcd9212e6cdc5b3c96e2329841"} Jan 27 20:30:26 crc kubenswrapper[4858]: I0127 20:30:26.629745 4858 generic.go:334] "Generic (PLEG): container finished" podID="d76a7fd9-c6ae-468d-814d-4340d0312bcb" containerID="082aa311904c1f0750e52f3098be55a0f8a014c2c5ce9bb0384e3bfcdc163eef" exitCode=0 Jan 27 20:30:26 crc kubenswrapper[4858]: I0127 20:30:26.629813 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" event={"ID":"d76a7fd9-c6ae-468d-814d-4340d0312bcb","Type":"ContainerDied","Data":"082aa311904c1f0750e52f3098be55a0f8a014c2c5ce9bb0384e3bfcdc163eef"} Jan 27 20:30:26 crc kubenswrapper[4858]: I0127 20:30:26.694713 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"338280a3-8a99-4294-81bb-bff485d21e74","Type":"ContainerStarted","Data":"890c582e5cf0d5588add4cfb427399d9e59ae0f97f5f21735b8b2ff6155b3443"} Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.429415 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.608756 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-ovsdbserver-nb\") pod \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.608927 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8wtw\" (UniqueName: \"kubernetes.io/projected/d76a7fd9-c6ae-468d-814d-4340d0312bcb-kube-api-access-d8wtw\") pod \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.610763 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-dns-svc\") pod \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.610841 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-config\") pod \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.610942 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-dns-swift-storage-0\") pod \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.611507 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-ovsdbserver-sb\") pod \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\" (UID: \"d76a7fd9-c6ae-468d-814d-4340d0312bcb\") " Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.641653 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d76a7fd9-c6ae-468d-814d-4340d0312bcb-kube-api-access-d8wtw" (OuterVolumeSpecName: "kube-api-access-d8wtw") pod "d76a7fd9-c6ae-468d-814d-4340d0312bcb" (UID: "d76a7fd9-c6ae-468d-814d-4340d0312bcb"). InnerVolumeSpecName "kube-api-access-d8wtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.718106 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8wtw\" (UniqueName: \"kubernetes.io/projected/d76a7fd9-c6ae-468d-814d-4340d0312bcb-kube-api-access-d8wtw\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.728674 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d76a7fd9-c6ae-468d-814d-4340d0312bcb" (UID: "d76a7fd9-c6ae-468d-814d-4340d0312bcb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.743026 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d76a7fd9-c6ae-468d-814d-4340d0312bcb" (UID: "d76a7fd9-c6ae-468d-814d-4340d0312bcb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.751919 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d76a7fd9-c6ae-468d-814d-4340d0312bcb" (UID: "d76a7fd9-c6ae-468d-814d-4340d0312bcb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.755449 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"338280a3-8a99-4294-81bb-bff485d21e74","Type":"ContainerStarted","Data":"e35048be77a102b455c2b021f6947f7873f1a6ae3b276ee7fdc26580b43fc21e"} Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.769807 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-config" (OuterVolumeSpecName: "config") pod "d76a7fd9-c6ae-468d-814d-4340d0312bcb" (UID: "d76a7fd9-c6ae-468d-814d-4340d0312bcb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.769951 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.772217 4858 generic.go:334] "Generic (PLEG): container finished" podID="3d6da88b-9fe4-4e4f-afdf-f8dccf939679" containerID="739c12354c4bba70f23579f5a79f3f3f786dbccb82067f18b91ac4392555ae6f" exitCode=0 Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.772276 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f9dfd55dd-q9n8v" event={"ID":"3d6da88b-9fe4-4e4f-afdf-f8dccf939679","Type":"ContainerDied","Data":"739c12354c4bba70f23579f5a79f3f3f786dbccb82067f18b91ac4392555ae6f"} Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.777993 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d76a7fd9-c6ae-468d-814d-4340d0312bcb" (UID: "d76a7fd9-c6ae-468d-814d-4340d0312bcb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.778655 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" event={"ID":"d76a7fd9-c6ae-468d-814d-4340d0312bcb","Type":"ContainerDied","Data":"cc27a03eef5aee0267efa78e92ff2a000871ec149dbda9b3aac14ef68f4cc030"} Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.778713 4858 scope.go:117] "RemoveContainer" containerID="082aa311904c1f0750e52f3098be55a0f8a014c2c5ce9bb0384e3bfcdc163eef" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.778927 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d8667ddb9-dmdvl" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.801847 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.801822382 podStartE2EDuration="4.801822382s" podCreationTimestamp="2026-01-27 20:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:30:27.797210009 +0000 UTC m=+1372.505025725" watchObservedRunningTime="2026-01-27 20:30:27.801822382 +0000 UTC m=+1372.509638088" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.819917 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.819949 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.819960 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.819970 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.819980 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d76a7fd9-c6ae-468d-814d-4340d0312bcb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.875339 4858 scope.go:117] "RemoveContainer" containerID="e836f5bc5c93f2e6ffbc44231a52e811850d0a0d575df0dda1ea9f7ece0325c8" Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.885392 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d8667ddb9-dmdvl"] Jan 27 20:30:27 crc kubenswrapper[4858]: I0127 20:30:27.895991 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d8667ddb9-dmdvl"] Jan 27 20:30:28 crc kubenswrapper[4858]: I0127 20:30:28.072145 4858 scope.go:117] "RemoveContainer" containerID="4fe2e17cab0c4bc7715a3d67286a23a4375609195c4d9d669b3262a2b09ce1d8" Jan 27 20:30:28 crc kubenswrapper[4858]: I0127 20:30:28.102640 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d76a7fd9-c6ae-468d-814d-4340d0312bcb" path="/var/lib/kubelet/pods/d76a7fd9-c6ae-468d-814d-4340d0312bcb/volumes" Jan 27 20:30:28 crc kubenswrapper[4858]: I0127 20:30:28.841212 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29bdfd71-369f-46e8-be09-4e5b5bb22d1a","Type":"ContainerStarted","Data":"4ba29899e7d04f4718d91bff330a18c63c5fbac89e7273574ac21bba4e445331"} Jan 27 20:30:28 crc kubenswrapper[4858]: I0127 20:30:28.844968 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"99d9c559-c61f-4bc2-907b-af9f9be0ce1b","Type":"ContainerStarted","Data":"7aa258bfad971de2ab7658e1139538288d0fd8f4d00d9ee09a7de38a6a9010cf"} Jan 27 20:30:29 crc kubenswrapper[4858]: I0127 20:30:29.328676 
4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:30:29 crc kubenswrapper[4858]: I0127 20:30:29.329212 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:30:29 crc kubenswrapper[4858]: I0127 20:30:29.329267 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:30:29 crc kubenswrapper[4858]: I0127 20:30:29.330129 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96d22823cb85c08d62e23a9b28d554ba658642fe85f23bff7568ba66ed62f3ed"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 20:30:29 crc kubenswrapper[4858]: I0127 20:30:29.330223 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://96d22823cb85c08d62e23a9b28d554ba658642fe85f23bff7568ba66ed62f3ed" gracePeriod=600 Jan 27 20:30:29 crc kubenswrapper[4858]: I0127 20:30:29.770287 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:29 crc kubenswrapper[4858]: I0127 20:30:29.856670 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="96d22823cb85c08d62e23a9b28d554ba658642fe85f23bff7568ba66ed62f3ed" exitCode=0 Jan 27 20:30:29 crc kubenswrapper[4858]: I0127 20:30:29.856740 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"96d22823cb85c08d62e23a9b28d554ba658642fe85f23bff7568ba66ed62f3ed"} Jan 27 20:30:29 crc kubenswrapper[4858]: I0127 20:30:29.856798 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8"} Jan 27 20:30:29 crc kubenswrapper[4858]: I0127 20:30:29.856818 4858 scope.go:117] "RemoveContainer" containerID="955bc619bd742d004863858dd5a8f86f78a2f164e013b906e4efa16975027e52" Jan 27 20:30:29 crc kubenswrapper[4858]: I0127 20:30:29.859669 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"338280a3-8a99-4294-81bb-bff485d21e74","Type":"ContainerStarted","Data":"f521e014f71f9991ed827a0ae545a83c3e5b200b5d422ead23c8ac9031dfd224"} Jan 27 20:30:29 crc kubenswrapper[4858]: I0127 20:30:29.859871 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 20:30:29 crc kubenswrapper[4858]: I0127 20:30:29.919312 4858 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.876634448 podStartE2EDuration="7.919284509s" podCreationTimestamp="2026-01-27 20:30:22 +0000 UTC" firstStartedPulling="2026-01-27 20:30:23.330610712 +0000 UTC m=+1368.038426418" lastFinishedPulling="2026-01-27 20:30:28.373260773 +0000 UTC m=+1373.081076479" observedRunningTime="2026-01-27 20:30:29.904385551 +0000 UTC m=+1374.612201257" watchObservedRunningTime="2026-01-27 20:30:29.919284509 +0000 UTC m=+1374.627100215" Jan 27 20:30:29 crc kubenswrapper[4858]: I0127 20:30:29.932685 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:30 crc kubenswrapper[4858]: I0127 20:30:30.673825 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5bf568dbc7-xlg4d" Jan 27 20:30:31 crc kubenswrapper[4858]: I0127 20:30:31.032592 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 20:30:31 crc kubenswrapper[4858]: I0127 20:30:31.099058 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 20:30:31 crc kubenswrapper[4858]: I0127 20:30:31.880489 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c" containerName="cinder-scheduler" containerID="cri-o://d9ea081c9b14825180aaf79b780b41a7e454a9460906cd9803de39b27f66f9c1" gracePeriod=30 Jan 27 20:30:31 crc kubenswrapper[4858]: I0127 20:30:31.880609 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c" containerName="probe" containerID="cri-o://76304cab4e075da41765444ea38b02f8c4cb97733112fbac3182ad9d44d8e304" gracePeriod=30 Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.746623 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5f7fd77bcb-cxmbt" podUID="2ec05cb1-c40c-48cb-ba64-9321abb6287c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.887969 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.895096 4858 generic.go:334] "Generic (PLEG): container finished" podID="e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c" containerID="76304cab4e075da41765444ea38b02f8c4cb97733112fbac3182ad9d44d8e304" exitCode=0 Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.895171 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c","Type":"ContainerDied","Data":"76304cab4e075da41765444ea38b02f8c4cb97733112fbac3182ad9d44d8e304"} Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.896927 4858 generic.go:334] "Generic (PLEG): container finished" podID="3d6da88b-9fe4-4e4f-afdf-f8dccf939679" containerID="e288a89f8b1f784e2218c3343b66f105090d3e016c5e1e75c29e27c37a16cc08" exitCode=0 Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.896954 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f9dfd55dd-q9n8v" event={"ID":"3d6da88b-9fe4-4e4f-afdf-f8dccf939679","Type":"ContainerDied","Data":"e288a89f8b1f784e2218c3343b66f105090d3e016c5e1e75c29e27c37a16cc08"} Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.896972 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f9dfd55dd-q9n8v" event={"ID":"3d6da88b-9fe4-4e4f-afdf-f8dccf939679","Type":"ContainerDied","Data":"aebd9c28c62e3cd62ffe1414a932881ca75567f27466f7217aedfe456dcd20db"} Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.896990 4858 scope.go:117] "RemoveContainer" containerID="739c12354c4bba70f23579f5a79f3f3f786dbccb82067f18b91ac4392555ae6f" Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.897138 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f9dfd55dd-q9n8v" Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.934221 4858 scope.go:117] "RemoveContainer" containerID="e288a89f8b1f784e2218c3343b66f105090d3e016c5e1e75c29e27c37a16cc08" Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.955693 4858 scope.go:117] "RemoveContainer" containerID="739c12354c4bba70f23579f5a79f3f3f786dbccb82067f18b91ac4392555ae6f" Jan 27 20:30:32 crc kubenswrapper[4858]: E0127 20:30:32.956248 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"739c12354c4bba70f23579f5a79f3f3f786dbccb82067f18b91ac4392555ae6f\": container with ID starting with 739c12354c4bba70f23579f5a79f3f3f786dbccb82067f18b91ac4392555ae6f not found: ID does not exist" containerID="739c12354c4bba70f23579f5a79f3f3f786dbccb82067f18b91ac4392555ae6f" Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.956282 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"739c12354c4bba70f23579f5a79f3f3f786dbccb82067f18b91ac4392555ae6f"} err="failed to get container status \"739c12354c4bba70f23579f5a79f3f3f786dbccb82067f18b91ac4392555ae6f\": rpc error: code = NotFound desc = could not find container \"739c12354c4bba70f23579f5a79f3f3f786dbccb82067f18b91ac4392555ae6f\": container with ID starting with 739c12354c4bba70f23579f5a79f3f3f786dbccb82067f18b91ac4392555ae6f not found: ID does not exist" Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.956305 4858 scope.go:117] "RemoveContainer" containerID="e288a89f8b1f784e2218c3343b66f105090d3e016c5e1e75c29e27c37a16cc08" Jan 27 20:30:32 crc kubenswrapper[4858]: E0127 20:30:32.957052 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e288a89f8b1f784e2218c3343b66f105090d3e016c5e1e75c29e27c37a16cc08\": container with ID starting with e288a89f8b1f784e2218c3343b66f105090d3e016c5e1e75c29e27c37a16cc08 not found: ID does not exist" containerID="e288a89f8b1f784e2218c3343b66f105090d3e016c5e1e75c29e27c37a16cc08" Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.957081 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e288a89f8b1f784e2218c3343b66f105090d3e016c5e1e75c29e27c37a16cc08"} err="failed to get container status \"e288a89f8b1f784e2218c3343b66f105090d3e016c5e1e75c29e27c37a16cc08\": rpc error: code = NotFound desc = could not find container \"e288a89f8b1f784e2218c3343b66f105090d3e016c5e1e75c29e27c37a16cc08\": container with ID starting with e288a89f8b1f784e2218c3343b66f105090d3e016c5e1e75c29e27c37a16cc08 not found: ID does not exist" Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.971472 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-combined-ca-bundle\") pod \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.971626 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbgv9\" (UniqueName: \"kubernetes.io/projected/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-kube-api-access-cbgv9\") pod \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.971712 4858 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-httpd-config\") pod \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.971854 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-config\") pod \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " Jan 27 20:30:32 crc kubenswrapper[4858]: I0127 20:30:32.971917 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-ovndb-tls-certs\") pod \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\" (UID: \"3d6da88b-9fe4-4e4f-afdf-f8dccf939679\") " Jan 27 20:30:33 crc kubenswrapper[4858]: I0127 20:30:33.004587 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3d6da88b-9fe4-4e4f-afdf-f8dccf939679" (UID: "3d6da88b-9fe4-4e4f-afdf-f8dccf939679"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:33 crc kubenswrapper[4858]: I0127 20:30:33.013015 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-kube-api-access-cbgv9" (OuterVolumeSpecName: "kube-api-access-cbgv9") pod "3d6da88b-9fe4-4e4f-afdf-f8dccf939679" (UID: "3d6da88b-9fe4-4e4f-afdf-f8dccf939679"). InnerVolumeSpecName "kube-api-access-cbgv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:33 crc kubenswrapper[4858]: I0127 20:30:33.059379 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-config" (OuterVolumeSpecName: "config") pod "3d6da88b-9fe4-4e4f-afdf-f8dccf939679" (UID: "3d6da88b-9fe4-4e4f-afdf-f8dccf939679"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:33 crc kubenswrapper[4858]: I0127 20:30:33.076856 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:33 crc kubenswrapper[4858]: I0127 20:30:33.076911 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbgv9\" (UniqueName: \"kubernetes.io/projected/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-kube-api-access-cbgv9\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:33 crc kubenswrapper[4858]: I0127 20:30:33.076924 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:33 crc kubenswrapper[4858]: I0127 20:30:33.081082 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d6da88b-9fe4-4e4f-afdf-f8dccf939679" (UID: "3d6da88b-9fe4-4e4f-afdf-f8dccf939679"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:33 crc kubenswrapper[4858]: I0127 20:30:33.104882 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3d6da88b-9fe4-4e4f-afdf-f8dccf939679" (UID: "3d6da88b-9fe4-4e4f-afdf-f8dccf939679"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:33 crc kubenswrapper[4858]: I0127 20:30:33.178572 4858 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:33 crc kubenswrapper[4858]: I0127 20:30:33.178619 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d6da88b-9fe4-4e4f-afdf-f8dccf939679-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:33 crc kubenswrapper[4858]: I0127 20:30:33.240274 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f9dfd55dd-q9n8v"] Jan 27 20:30:33 crc kubenswrapper[4858]: I0127 20:30:33.250178 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f9dfd55dd-q9n8v"] Jan 27 20:30:33 crc kubenswrapper[4858]: I0127 20:30:33.924703 4858 generic.go:334] "Generic (PLEG): container finished" podID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" containerID="7aa258bfad971de2ab7658e1139538288d0fd8f4d00d9ee09a7de38a6a9010cf" exitCode=1 Jan 27 20:30:33 crc kubenswrapper[4858]: I0127 20:30:33.924780 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"99d9c559-c61f-4bc2-907b-af9f9be0ce1b","Type":"ContainerDied","Data":"7aa258bfad971de2ab7658e1139538288d0fd8f4d00d9ee09a7de38a6a9010cf"} Jan 27 20:30:33 crc kubenswrapper[4858]: I0127 20:30:33.925283 4858 scope.go:117] "RemoveContainer" containerID="4fe2e17cab0c4bc7715a3d67286a23a4375609195c4d9d669b3262a2b09ce1d8" Jan 27 20:30:33 crc kubenswrapper[4858]: I0127 20:30:33.926256 4858 scope.go:117] "RemoveContainer" containerID="7aa258bfad971de2ab7658e1139538288d0fd8f4d00d9ee09a7de38a6a9010cf" Jan 27 20:30:33 crc kubenswrapper[4858]: E0127 20:30:33.926598 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(99d9c559-c61f-4bc2-907b-af9f9be0ce1b)\"" pod="openstack/watcher-decision-engine-0" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.091773 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d6da88b-9fe4-4e4f-afdf-f8dccf939679" path="/var/lib/kubelet/pods/3d6da88b-9fe4-4e4f-afdf-f8dccf939679/volumes" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.092658 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.092695 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.092705 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 27 20:30:34 crc kubenswrapper[4858]: 
I0127 20:30:34.092717 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.356156 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.434547 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 20:30:34 crc kubenswrapper[4858]: E0127 20:30:34.435109 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76a7fd9-c6ae-468d-814d-4340d0312bcb" containerName="init" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.435129 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76a7fd9-c6ae-468d-814d-4340d0312bcb" containerName="init" Jan 27 20:30:34 crc kubenswrapper[4858]: E0127 20:30:34.435153 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6da88b-9fe4-4e4f-afdf-f8dccf939679" containerName="neutron-api" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.435160 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6da88b-9fe4-4e4f-afdf-f8dccf939679" containerName="neutron-api" Jan 27 20:30:34 crc kubenswrapper[4858]: E0127 20:30:34.435189 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76a7fd9-c6ae-468d-814d-4340d0312bcb" containerName="dnsmasq-dns" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.435195 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76a7fd9-c6ae-468d-814d-4340d0312bcb" containerName="dnsmasq-dns" Jan 27 20:30:34 crc kubenswrapper[4858]: E0127 20:30:34.435209 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6da88b-9fe4-4e4f-afdf-f8dccf939679" containerName="neutron-httpd" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.435215 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6da88b-9fe4-4e4f-afdf-f8dccf939679" containerName="neutron-httpd" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.435410 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d6da88b-9fe4-4e4f-afdf-f8dccf939679" containerName="neutron-api" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.435425 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d76a7fd9-c6ae-468d-814d-4340d0312bcb" containerName="dnsmasq-dns" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.435438 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d6da88b-9fe4-4e4f-afdf-f8dccf939679" containerName="neutron-httpd" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.436205 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.439447 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.439498 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-56lnx" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.440782 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.464285 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.514663 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/892953fc-7620-4274-9f89-c86e2ec23782-openstack-config\") pod \"openstackclient\" (UID: \"892953fc-7620-4274-9f89-c86e2ec23782\") " pod="openstack/openstackclient" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.514802 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/892953fc-7620-4274-9f89-c86e2ec23782-openstack-config-secret\") pod \"openstackclient\" (UID: \"892953fc-7620-4274-9f89-c86e2ec23782\") " pod="openstack/openstackclient" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.514940 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krxz9\" (UniqueName: \"kubernetes.io/projected/892953fc-7620-4274-9f89-c86e2ec23782-kube-api-access-krxz9\") pod \"openstackclient\" (UID: \"892953fc-7620-4274-9f89-c86e2ec23782\") " pod="openstack/openstackclient" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.514978 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892953fc-7620-4274-9f89-c86e2ec23782-combined-ca-bundle\") pod \"openstackclient\" (UID: \"892953fc-7620-4274-9f89-c86e2ec23782\") " pod="openstack/openstackclient" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.616942 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krxz9\" (UniqueName: \"kubernetes.io/projected/892953fc-7620-4274-9f89-c86e2ec23782-kube-api-access-krxz9\") pod \"openstackclient\" (UID: \"892953fc-7620-4274-9f89-c86e2ec23782\") " pod="openstack/openstackclient" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.617438 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892953fc-7620-4274-9f89-c86e2ec23782-combined-ca-bundle\") pod \"openstackclient\" (UID: \"892953fc-7620-4274-9f89-c86e2ec23782\") " pod="openstack/openstackclient" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.617672 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/892953fc-7620-4274-9f89-c86e2ec23782-openstack-config\") pod \"openstackclient\" (UID: \"892953fc-7620-4274-9f89-c86e2ec23782\") " pod="openstack/openstackclient" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.617715 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/892953fc-7620-4274-9f89-c86e2ec23782-openstack-config-secret\") pod \"openstackclient\" (UID: \"892953fc-7620-4274-9f89-c86e2ec23782\") " pod="openstack/openstackclient" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.618816 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/892953fc-7620-4274-9f89-c86e2ec23782-openstack-config\") pod \"openstackclient\" (UID: \"892953fc-7620-4274-9f89-c86e2ec23782\") " pod="openstack/openstackclient" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.634539 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892953fc-7620-4274-9f89-c86e2ec23782-combined-ca-bundle\") pod \"openstackclient\" (UID: \"892953fc-7620-4274-9f89-c86e2ec23782\") " pod="openstack/openstackclient" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.638210 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/892953fc-7620-4274-9f89-c86e2ec23782-openstack-config-secret\") pod \"openstackclient\" (UID: \"892953fc-7620-4274-9f89-c86e2ec23782\") " pod="openstack/openstackclient" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.646298 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krxz9\" (UniqueName: \"kubernetes.io/projected/892953fc-7620-4274-9f89-c86e2ec23782-kube-api-access-krxz9\") pod \"openstackclient\" (UID: \"892953fc-7620-4274-9f89-c86e2ec23782\") " pod="openstack/openstackclient" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.735831 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69dcf58cf6-v246z" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.761750 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.840801 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-757c454848-p8szs"] Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.841429 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-757c454848-p8szs" podUID="953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" containerName="barbican-api-log" containerID="cri-o://3e388010533ba3604015535171cbac5c4ace16b31b2704dde6d62d1f013e153d" gracePeriod=30 Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.841930 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-757c454848-p8szs" podUID="953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" containerName="barbican-api" containerID="cri-o://d33b53914cfd421a68ccc079404ef982cde7f79ec3a4b2e53c171995dc752d75" gracePeriod=30 Jan 27 20:30:34 crc kubenswrapper[4858]: I0127 20:30:34.985522 4858 scope.go:117] "RemoveContainer" containerID="7aa258bfad971de2ab7658e1139538288d0fd8f4d00d9ee09a7de38a6a9010cf" Jan 27 20:30:34 crc kubenswrapper[4858]: E0127 20:30:34.985756 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(99d9c559-c61f-4bc2-907b-af9f9be0ce1b)\"" pod="openstack/watcher-decision-engine-0" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" Jan 27 20:30:35 crc kubenswrapper[4858]: I0127 20:30:35.772764 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 20:30:35 crc kubenswrapper[4858]: W0127 20:30:35.778155 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod892953fc_7620_4274_9f89_c86e2ec23782.slice/crio-72033f9dce29e8ce4fd118a7b1cab6558f8613a2abe7d546fcdc3c0a5f5ac7e1 WatchSource:0}: Error finding container 72033f9dce29e8ce4fd118a7b1cab6558f8613a2abe7d546fcdc3c0a5f5ac7e1: Status 404 returned error can't find the container with id 72033f9dce29e8ce4fd118a7b1cab6558f8613a2abe7d546fcdc3c0a5f5ac7e1 Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.006304 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"892953fc-7620-4274-9f89-c86e2ec23782","Type":"ContainerStarted","Data":"72033f9dce29e8ce4fd118a7b1cab6558f8613a2abe7d546fcdc3c0a5f5ac7e1"} Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.020218 4858 generic.go:334] "Generic (PLEG): container finished" podID="953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" containerID="3e388010533ba3604015535171cbac5c4ace16b31b2704dde6d62d1f013e153d" exitCode=143 Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.020272 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-757c454848-p8szs" event={"ID":"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06","Type":"ContainerDied","Data":"3e388010533ba3604015535171cbac5c4ace16b31b2704dde6d62d1f013e153d"} Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.783747 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.875523 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-config-data\") pod \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.875749 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-scripts\") pod \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.875779 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-config-data-custom\") pod \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.875877 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-combined-ca-bundle\") pod \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.875913 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-etc-machine-id\") pod \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.875955 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5d8s\" (UniqueName: \"kubernetes.io/projected/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-kube-api-access-l5d8s\") pod \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\" (UID: \"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c\") " Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.877409 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c" (UID: "e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.900931 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c" (UID: "e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.904230 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-scripts" (OuterVolumeSpecName: "scripts") pod "e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c" (UID: "e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.910160 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-kube-api-access-l5d8s" (OuterVolumeSpecName: "kube-api-access-l5d8s") pod "e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c" (UID: "e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c"). InnerVolumeSpecName "kube-api-access-l5d8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.979319 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.979395 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.979412 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.979425 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5d8s\" (UniqueName: \"kubernetes.io/projected/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-kube-api-access-l5d8s\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:36 crc kubenswrapper[4858]: I0127 20:30:36.990346 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c" (UID: "e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.049414 4858 generic.go:334] "Generic (PLEG): container finished" podID="e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c" containerID="d9ea081c9b14825180aaf79b780b41a7e454a9460906cd9803de39b27f66f9c1" exitCode=0 Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.049463 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c","Type":"ContainerDied","Data":"d9ea081c9b14825180aaf79b780b41a7e454a9460906cd9803de39b27f66f9c1"} Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.049498 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c","Type":"ContainerDied","Data":"04f625e3e4fc97d10654024257243cdd11ae5ea4aa3f0690d1504c67d3448d0f"} Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.049521 4858 scope.go:117] "RemoveContainer" containerID="76304cab4e075da41765444ea38b02f8c4cb97733112fbac3182ad9d44d8e304" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.049700 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.084180 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.150785 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-config-data" (OuterVolumeSpecName: "config-data") pod "e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c" (UID: "e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.160200 4858 scope.go:117] "RemoveContainer" containerID="d9ea081c9b14825180aaf79b780b41a7e454a9460906cd9803de39b27f66f9c1" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.187132 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.253816 4858 scope.go:117] "RemoveContainer" containerID="76304cab4e075da41765444ea38b02f8c4cb97733112fbac3182ad9d44d8e304" Jan 27 20:30:37 crc kubenswrapper[4858]: E0127 20:30:37.254636 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76304cab4e075da41765444ea38b02f8c4cb97733112fbac3182ad9d44d8e304\": container with ID starting with 76304cab4e075da41765444ea38b02f8c4cb97733112fbac3182ad9d44d8e304 not found: ID does not exist" containerID="76304cab4e075da41765444ea38b02f8c4cb97733112fbac3182ad9d44d8e304" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.254669 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76304cab4e075da41765444ea38b02f8c4cb97733112fbac3182ad9d44d8e304"} err="failed to get container status \"76304cab4e075da41765444ea38b02f8c4cb97733112fbac3182ad9d44d8e304\": rpc error: code = NotFound desc = could not find container \"76304cab4e075da41765444ea38b02f8c4cb97733112fbac3182ad9d44d8e304\": container with ID starting with 76304cab4e075da41765444ea38b02f8c4cb97733112fbac3182ad9d44d8e304 not found: ID does not exist" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.254696 4858 scope.go:117] "RemoveContainer" containerID="d9ea081c9b14825180aaf79b780b41a7e454a9460906cd9803de39b27f66f9c1" Jan 27 20:30:37 crc kubenswrapper[4858]: E0127 20:30:37.255388 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9ea081c9b14825180aaf79b780b41a7e454a9460906cd9803de39b27f66f9c1\": container with ID starting with d9ea081c9b14825180aaf79b780b41a7e454a9460906cd9803de39b27f66f9c1 not found: ID does not exist" containerID="d9ea081c9b14825180aaf79b780b41a7e454a9460906cd9803de39b27f66f9c1" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.255456 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9ea081c9b14825180aaf79b780b41a7e454a9460906cd9803de39b27f66f9c1"} err="failed to get container status \"d9ea081c9b14825180aaf79b780b41a7e454a9460906cd9803de39b27f66f9c1\": rpc error: code = NotFound desc = could not find container \"d9ea081c9b14825180aaf79b780b41a7e454a9460906cd9803de39b27f66f9c1\": container 
with ID starting with d9ea081c9b14825180aaf79b780b41a7e454a9460906cd9803de39b27f66f9c1 not found: ID does not exist" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.409668 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.424925 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.443995 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 20:30:37 crc kubenswrapper[4858]: E0127 20:30:37.444611 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c" containerName="cinder-scheduler" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.444636 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c" containerName="cinder-scheduler" Jan 27 20:30:37 crc kubenswrapper[4858]: E0127 20:30:37.444655 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c" containerName="probe" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.444663 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c" containerName="probe" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.444901 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c" containerName="probe" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.444931 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c" containerName="cinder-scheduler" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.446255 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.451063 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.485453 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.508680 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d853bb36-2749-40a8-9533-4caa077b1812-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.508738 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d853bb36-2749-40a8-9533-4caa077b1812-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.508794 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d853bb36-2749-40a8-9533-4caa077b1812-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.508827 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d853bb36-2749-40a8-9533-4caa077b1812-config-data\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.508855 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d853bb36-2749-40a8-9533-4caa077b1812-scripts\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.509117 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m88jx\" (UniqueName: \"kubernetes.io/projected/d853bb36-2749-40a8-9533-4caa077b1812-kube-api-access-m88jx\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.611107 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m88jx\" (UniqueName: \"kubernetes.io/projected/d853bb36-2749-40a8-9533-4caa077b1812-kube-api-access-m88jx\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.611437 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d853bb36-2749-40a8-9533-4caa077b1812-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.611458 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d853bb36-2749-40a8-9533-4caa077b1812-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.612253 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d853bb36-2749-40a8-9533-4caa077b1812-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.612295 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d853bb36-2749-40a8-9533-4caa077b1812-config-data\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.612320 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d853bb36-2749-40a8-9533-4caa077b1812-scripts\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.612319 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d853bb36-2749-40a8-9533-4caa077b1812-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.616513 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d853bb36-2749-40a8-9533-4caa077b1812-scripts\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.616526 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d853bb36-2749-40a8-9533-4caa077b1812-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.620009 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d853bb36-2749-40a8-9533-4caa077b1812-config-data\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.620339 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d853bb36-2749-40a8-9533-4caa077b1812-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.634849 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m88jx\" (UniqueName: \"kubernetes.io/projected/d853bb36-2749-40a8-9533-4caa077b1812-kube-api-access-m88jx\") pod \"cinder-scheduler-0\" (UID: \"d853bb36-2749-40a8-9533-4caa077b1812\") " pod="openstack/cinder-scheduler-0" Jan 
27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.773636 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 20:30:37 crc kubenswrapper[4858]: I0127 20:30:37.901903 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="29bdfd71-369f-46e8-be09-4e5b5bb22d1a" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.186:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 20:30:38 crc kubenswrapper[4858]: I0127 20:30:38.104467 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c" path="/var/lib/kubelet/pods/e6a93d6b-a8fe-4c7f-9f8a-702d724bfd2c/volumes" Jan 27 20:30:38 crc kubenswrapper[4858]: I0127 20:30:38.171812 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 20:30:38 crc kubenswrapper[4858]: I0127 20:30:38.455193 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-757c454848-p8szs" podUID="953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.182:9311/healthcheck\": read tcp 10.217.0.2:60108->10.217.0.182:9311: read: connection reset by peer" Jan 27 20:30:38 crc kubenswrapper[4858]: I0127 20:30:38.455945 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-757c454848-p8szs" podUID="953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.182:9311/healthcheck\": read tcp 10.217.0.2:60116->10.217.0.182:9311: read: connection reset by peer" Jan 27 20:30:38 crc kubenswrapper[4858]: I0127 20:30:38.474197 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 20:30:38 crc kubenswrapper[4858]: E0127 20:30:38.940229 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod953b3bbd_e4eb_48b1_afd8_6ac2b9050c06.slice/crio-d33b53914cfd421a68ccc079404ef982cde7f79ec3a4b2e53c171995dc752d75.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod953b3bbd_e4eb_48b1_afd8_6ac2b9050c06.slice/crio-conmon-d33b53914cfd421a68ccc079404ef982cde7f79ec3a4b2e53c171995dc752d75.scope\": RecentStats: unable to find data in memory cache]" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.014314 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.097630 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d853bb36-2749-40a8-9533-4caa077b1812","Type":"ContainerStarted","Data":"b08742ba02c5429f650f245bdeadd37d40e9699cfbb5a208da01827a8a4704e8"} Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.104399 4858 generic.go:334] "Generic (PLEG): container finished" podID="953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" containerID="d33b53914cfd421a68ccc079404ef982cde7f79ec3a4b2e53c171995dc752d75" exitCode=0 Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.104458 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-757c454848-p8szs" event={"ID":"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06","Type":"ContainerDied","Data":"d33b53914cfd421a68ccc079404ef982cde7f79ec3a4b2e53c171995dc752d75"} Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.104491 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-757c454848-p8szs" event={"ID":"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06","Type":"ContainerDied","Data":"0cc5d8fe526044eb1f0c3315696572da373d15304b2de7ffe46f094435713c65"} Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.104514 4858 scope.go:117] "RemoveContainer" containerID="d33b53914cfd421a68ccc079404ef982cde7f79ec3a4b2e53c171995dc752d75" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.104704 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-757c454848-p8szs" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.144195 4858 scope.go:117] "RemoveContainer" containerID="3e388010533ba3604015535171cbac5c4ace16b31b2704dde6d62d1f013e153d" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.169455 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngg4j\" (UniqueName: \"kubernetes.io/projected/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-kube-api-access-ngg4j\") pod \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.169600 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-combined-ca-bundle\") pod \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.169715 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-logs\") pod \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.169770 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-config-data\") pod \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\" (UID: \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.169902 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-config-data-custom\") pod \"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\" (UID: 
\"953b3bbd-e4eb-48b1-afd8-6ac2b9050c06\") " Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.172038 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-logs" (OuterVolumeSpecName: "logs") pod "953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" (UID: "953b3bbd-e4eb-48b1-afd8-6ac2b9050c06"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.177611 4858 scope.go:117] "RemoveContainer" containerID="d33b53914cfd421a68ccc079404ef982cde7f79ec3a4b2e53c171995dc752d75" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.177747 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-kube-api-access-ngg4j" (OuterVolumeSpecName: "kube-api-access-ngg4j") pod "953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" (UID: "953b3bbd-e4eb-48b1-afd8-6ac2b9050c06"). InnerVolumeSpecName "kube-api-access-ngg4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:39 crc kubenswrapper[4858]: E0127 20:30:39.180318 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d33b53914cfd421a68ccc079404ef982cde7f79ec3a4b2e53c171995dc752d75\": container with ID starting with d33b53914cfd421a68ccc079404ef982cde7f79ec3a4b2e53c171995dc752d75 not found: ID does not exist" containerID="d33b53914cfd421a68ccc079404ef982cde7f79ec3a4b2e53c171995dc752d75" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.180851 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d33b53914cfd421a68ccc079404ef982cde7f79ec3a4b2e53c171995dc752d75"} err="failed to get container status \"d33b53914cfd421a68ccc079404ef982cde7f79ec3a4b2e53c171995dc752d75\": rpc error: code = NotFound desc = could not find container \"d33b53914cfd421a68ccc079404ef982cde7f79ec3a4b2e53c171995dc752d75\": container with ID starting with d33b53914cfd421a68ccc079404ef982cde7f79ec3a4b2e53c171995dc752d75 not found: ID does not exist" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.180917 4858 scope.go:117] "RemoveContainer" containerID="3e388010533ba3604015535171cbac5c4ace16b31b2704dde6d62d1f013e153d" Jan 27 20:30:39 crc kubenswrapper[4858]: E0127 20:30:39.183620 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e388010533ba3604015535171cbac5c4ace16b31b2704dde6d62d1f013e153d\": container with ID starting with 3e388010533ba3604015535171cbac5c4ace16b31b2704dde6d62d1f013e153d not found: ID does not exist" containerID="3e388010533ba3604015535171cbac5c4ace16b31b2704dde6d62d1f013e153d" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.183646 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e388010533ba3604015535171cbac5c4ace16b31b2704dde6d62d1f013e153d"} err="failed to get container status \"3e388010533ba3604015535171cbac5c4ace16b31b2704dde6d62d1f013e153d\": rpc error: code = NotFound desc = could not find container \"3e388010533ba3604015535171cbac5c4ace16b31b2704dde6d62d1f013e153d\": container with ID starting with 3e388010533ba3604015535171cbac5c4ace16b31b2704dde6d62d1f013e153d not found: ID does not exist" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.204777 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" (UID: "953b3bbd-e4eb-48b1-afd8-6ac2b9050c06"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.231450 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" (UID: "953b3bbd-e4eb-48b1-afd8-6ac2b9050c06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.257029 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-config-data" (OuterVolumeSpecName: "config-data") pod "953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" (UID: "953b3bbd-e4eb-48b1-afd8-6ac2b9050c06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.272771 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.272834 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngg4j\" (UniqueName: \"kubernetes.io/projected/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-kube-api-access-ngg4j\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.272846 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.272855 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.272865 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.445627 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-757c454848-p8szs"] Jan 27 20:30:39 crc kubenswrapper[4858]: I0127 20:30:39.458440 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-757c454848-p8szs"] Jan 27 20:30:40 crc kubenswrapper[4858]: I0127 20:30:40.095233 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" path="/var/lib/kubelet/pods/953b3bbd-e4eb-48b1-afd8-6ac2b9050c06/volumes" Jan 27 20:30:40 crc kubenswrapper[4858]: I0127 20:30:40.138142 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d853bb36-2749-40a8-9533-4caa077b1812","Type":"ContainerStarted","Data":"3fbb52ea6f0529e31f28a19852f5cbc6a947d78e680fb6e5e7f8aae92dfc0e48"} Jan 27 20:30:41 crc kubenswrapper[4858]: I0127 20:30:41.156099 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"d853bb36-2749-40a8-9533-4caa077b1812","Type":"ContainerStarted","Data":"24658d038036679376ff5ea0e6de0115915e1dfb3137f8f686060244bb7bd995"} Jan 27 20:30:41 crc kubenswrapper[4858]: I0127 20:30:41.188001 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.187974405 podStartE2EDuration="4.187974405s" podCreationTimestamp="2026-01-27 20:30:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:30:41.180788469 +0000 UTC m=+1385.888604185" watchObservedRunningTime="2026-01-27 20:30:41.187974405 +0000 UTC m=+1385.895790111" Jan 27 20:30:42 crc kubenswrapper[4858]: I0127 20:30:42.746971 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5f7fd77bcb-cxmbt" podUID="2ec05cb1-c40c-48cb-ba64-9321abb6287c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Jan 27 20:30:42 crc kubenswrapper[4858]: I0127 20:30:42.747583 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:30:42 crc kubenswrapper[4858]: I0127 20:30:42.773896 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.063516 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.064064 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="338280a3-8a99-4294-81bb-bff485d21e74" containerName="sg-core" containerID="cri-o://e35048be77a102b455c2b021f6947f7873f1a6ae3b276ee7fdc26580b43fc21e" gracePeriod=30 Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.064067 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="338280a3-8a99-4294-81bb-bff485d21e74" containerName="proxy-httpd" containerID="cri-o://f521e014f71f9991ed827a0ae545a83c3e5b200b5d422ead23c8ac9031dfd224" gracePeriod=30 Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.064150 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="338280a3-8a99-4294-81bb-bff485d21e74" containerName="ceilometer-notification-agent" containerID="cri-o://890c582e5cf0d5588add4cfb427399d9e59ae0f97f5f21735b8b2ff6155b3443" gracePeriod=30 Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.064035 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="338280a3-8a99-4294-81bb-bff485d21e74" containerName="ceilometer-central-agent" containerID="cri-o://998018ad68c3e321ad5ee5240e7b96c47151dd626b1e2a865bab3f619aba0d21" gracePeriod=30 Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.097336 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="338280a3-8a99-4294-81bb-bff485d21e74" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.185:3000/\": EOF" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.421073 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-574fc98977-sp7zp"] Jan 27 20:30:43 crc kubenswrapper[4858]: E0127 20:30:43.422826 4858 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" containerName="barbican-api-log" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.423065 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" containerName="barbican-api-log" Jan 27 20:30:43 crc kubenswrapper[4858]: E0127 20:30:43.423190 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" containerName="barbican-api" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.423203 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" containerName="barbican-api" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.423540 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" containerName="barbican-api" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.423582 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="953b3bbd-e4eb-48b1-afd8-6ac2b9050c06" containerName="barbican-api-log" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.424751 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.428609 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.429645 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.435622 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.461590 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-574fc98977-sp7zp"] Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.567159 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/57e04641-598d-459b-9996-0ae4182ae4fb-etc-swift\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.567215 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e04641-598d-459b-9996-0ae4182ae4fb-public-tls-certs\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.567290 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e04641-598d-459b-9996-0ae4182ae4fb-config-data\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.567466 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e04641-598d-459b-9996-0ae4182ae4fb-combined-ca-bundle\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " 
pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.567670 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8mhh\" (UniqueName: \"kubernetes.io/projected/57e04641-598d-459b-9996-0ae4182ae4fb-kube-api-access-c8mhh\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.567754 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e04641-598d-459b-9996-0ae4182ae4fb-internal-tls-certs\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.567914 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57e04641-598d-459b-9996-0ae4182ae4fb-run-httpd\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.568081 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57e04641-598d-459b-9996-0ae4182ae4fb-log-httpd\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.670339 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e04641-598d-459b-9996-0ae4182ae4fb-combined-ca-bundle\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.670514 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8mhh\" (UniqueName: \"kubernetes.io/projected/57e04641-598d-459b-9996-0ae4182ae4fb-kube-api-access-c8mhh\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.670577 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e04641-598d-459b-9996-0ae4182ae4fb-internal-tls-certs\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.670629 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57e04641-598d-459b-9996-0ae4182ae4fb-run-httpd\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.670700 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57e04641-598d-459b-9996-0ae4182ae4fb-log-httpd\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: 
\"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.671920 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57e04641-598d-459b-9996-0ae4182ae4fb-run-httpd\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.672197 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/57e04641-598d-459b-9996-0ae4182ae4fb-log-httpd\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.670796 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/57e04641-598d-459b-9996-0ae4182ae4fb-etc-swift\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.676038 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e04641-598d-459b-9996-0ae4182ae4fb-public-tls-certs\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.676278 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e04641-598d-459b-9996-0ae4182ae4fb-config-data\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.680159 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e04641-598d-459b-9996-0ae4182ae4fb-combined-ca-bundle\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.680209 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e04641-598d-459b-9996-0ae4182ae4fb-internal-tls-certs\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.680777 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/57e04641-598d-459b-9996-0ae4182ae4fb-etc-swift\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.686540 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e04641-598d-459b-9996-0ae4182ae4fb-public-tls-certs\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 
20:30:43.687198 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57e04641-598d-459b-9996-0ae4182ae4fb-config-data\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.693146 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8mhh\" (UniqueName: \"kubernetes.io/projected/57e04641-598d-459b-9996-0ae4182ae4fb-kube-api-access-c8mhh\") pod \"swift-proxy-574fc98977-sp7zp\" (UID: \"57e04641-598d-459b-9996-0ae4182ae4fb\") " pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:43 crc kubenswrapper[4858]: I0127 20:30:43.750264 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:44 crc kubenswrapper[4858]: I0127 20:30:44.201575 4858 generic.go:334] "Generic (PLEG): container finished" podID="338280a3-8a99-4294-81bb-bff485d21e74" containerID="f521e014f71f9991ed827a0ae545a83c3e5b200b5d422ead23c8ac9031dfd224" exitCode=0 Jan 27 20:30:44 crc kubenswrapper[4858]: I0127 20:30:44.201910 4858 generic.go:334] "Generic (PLEG): container finished" podID="338280a3-8a99-4294-81bb-bff485d21e74" containerID="e35048be77a102b455c2b021f6947f7873f1a6ae3b276ee7fdc26580b43fc21e" exitCode=2 Jan 27 20:30:44 crc kubenswrapper[4858]: I0127 20:30:44.201628 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"338280a3-8a99-4294-81bb-bff485d21e74","Type":"ContainerDied","Data":"f521e014f71f9991ed827a0ae545a83c3e5b200b5d422ead23c8ac9031dfd224"} Jan 27 20:30:44 crc kubenswrapper[4858]: I0127 20:30:44.202028 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"338280a3-8a99-4294-81bb-bff485d21e74","Type":"ContainerDied","Data":"e35048be77a102b455c2b021f6947f7873f1a6ae3b276ee7fdc26580b43fc21e"} Jan 27 20:30:44 crc kubenswrapper[4858]: I0127 20:30:44.202047 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"338280a3-8a99-4294-81bb-bff485d21e74","Type":"ContainerDied","Data":"998018ad68c3e321ad5ee5240e7b96c47151dd626b1e2a865bab3f619aba0d21"} Jan 27 20:30:44 crc kubenswrapper[4858]: I0127 20:30:44.201991 4858 generic.go:334] "Generic (PLEG): container finished" podID="338280a3-8a99-4294-81bb-bff485d21e74" containerID="998018ad68c3e321ad5ee5240e7b96c47151dd626b1e2a865bab3f619aba0d21" exitCode=0 Jan 27 20:30:45 crc kubenswrapper[4858]: I0127 20:30:45.071074 4858 scope.go:117] "RemoveContainer" containerID="7aa258bfad971de2ab7658e1139538288d0fd8f4d00d9ee09a7de38a6a9010cf" Jan 27 20:30:45 crc kubenswrapper[4858]: E0127 20:30:45.072128 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(99d9c559-c61f-4bc2-907b-af9f9be0ce1b)\"" pod="openstack/watcher-decision-engine-0" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" Jan 27 20:30:46 crc kubenswrapper[4858]: I0127 20:30:46.238888 4858 generic.go:334] "Generic (PLEG): container finished" podID="338280a3-8a99-4294-81bb-bff485d21e74" containerID="890c582e5cf0d5588add4cfb427399d9e59ae0f97f5f21735b8b2ff6155b3443" exitCode=0 Jan 27 20:30:46 crc kubenswrapper[4858]: I0127 20:30:46.238963 4858 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"338280a3-8a99-4294-81bb-bff485d21e74","Type":"ContainerDied","Data":"890c582e5cf0d5588add4cfb427399d9e59ae0f97f5f21735b8b2ff6155b3443"} Jan 27 20:30:46 crc kubenswrapper[4858]: I0127 20:30:46.904355 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f9b655566-275d7" Jan 27 20:30:46 crc kubenswrapper[4858]: I0127 20:30:46.961675 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f9b655566-275d7" Jan 27 20:30:47 crc kubenswrapper[4858]: I0127 20:30:47.975507 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 20:30:48 crc kubenswrapper[4858]: I0127 20:30:48.230854 4858 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod74707222-b7c2-4226-8df2-2459cb7d447c"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod74707222-b7c2-4226-8df2-2459cb7d447c] : Timed out while waiting for systemd to remove kubepods-besteffort-pod74707222_b7c2_4226_8df2_2459cb7d447c.slice" Jan 27 20:30:48 crc kubenswrapper[4858]: E0127 20:30:48.230926 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod74707222-b7c2-4226-8df2-2459cb7d447c] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod74707222-b7c2-4226-8df2-2459cb7d447c] : Timed out while waiting for systemd to remove kubepods-besteffort-pod74707222_b7c2_4226_8df2_2459cb7d447c.slice" pod="openstack/horizon-6544888b69-dvcr4" podUID="74707222-b7c2-4226-8df2-2459cb7d447c" Jan 27 20:30:48 crc kubenswrapper[4858]: I0127 20:30:48.260761 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6544888b69-dvcr4" Jan 27 20:30:48 crc kubenswrapper[4858]: I0127 20:30:48.290192 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6544888b69-dvcr4"] Jan 27 20:30:48 crc kubenswrapper[4858]: I0127 20:30:48.303696 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6544888b69-dvcr4"] Jan 27 20:30:48 crc kubenswrapper[4858]: I0127 20:30:48.437009 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:30:48 crc kubenswrapper[4858]: I0127 20:30:48.437302 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" containerName="glance-log" containerID="cri-o://a0393d519aed8fda89dc4bb31ff6c539972cf05f32fc03dd05146e42e6ce41b4" gracePeriod=30 Jan 27 20:30:48 crc kubenswrapper[4858]: I0127 20:30:48.437477 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" containerName="glance-httpd" containerID="cri-o://d4d28fe9406c16a682feccd8d2ec588758e8d95a1ded305d26db615a3a1729c4" gracePeriod=30 Jan 27 20:30:49 crc kubenswrapper[4858]: E0127 20:30:49.256664 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ec05cb1_c40c_48cb_ba64_9321abb6287c.slice/crio-51a7810e6ed3102dd208860bde7beb41d43fac91b3815b7ecdc22e5d766e5ed9.scope\": RecentStats: unable to find data in memory cache]" Jan 27 20:30:49 crc kubenswrapper[4858]: I0127 20:30:49.287967 4858 generic.go:334] "Generic (PLEG): container finished" podID="8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" containerID="a0393d519aed8fda89dc4bb31ff6c539972cf05f32fc03dd05146e42e6ce41b4" exitCode=143 Jan 27 20:30:49 crc kubenswrapper[4858]: I0127 20:30:49.288034 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508","Type":"ContainerDied","Data":"a0393d519aed8fda89dc4bb31ff6c539972cf05f32fc03dd05146e42e6ce41b4"} Jan 27 20:30:50 crc kubenswrapper[4858]: I0127 20:30:50.086425 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74707222-b7c2-4226-8df2-2459cb7d447c" path="/var/lib/kubelet/pods/74707222-b7c2-4226-8df2-2459cb7d447c/volumes" Jan 27 20:30:50 crc kubenswrapper[4858]: I0127 20:30:50.305843 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ec05cb1-c40c-48cb-ba64-9321abb6287c" containerID="51a7810e6ed3102dd208860bde7beb41d43fac91b3815b7ecdc22e5d766e5ed9" exitCode=137 Jan 27 20:30:50 crc kubenswrapper[4858]: I0127 20:30:50.305913 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f7fd77bcb-cxmbt" event={"ID":"2ec05cb1-c40c-48cb-ba64-9321abb6287c","Type":"ContainerDied","Data":"51a7810e6ed3102dd208860bde7beb41d43fac91b3815b7ecdc22e5d766e5ed9"} Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.267914 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.306233 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.335969 4858 generic.go:334] "Generic (PLEG): container finished" podID="8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" containerID="d4d28fe9406c16a682feccd8d2ec588758e8d95a1ded305d26db615a3a1729c4" exitCode=0 Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.336120 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508","Type":"ContainerDied","Data":"d4d28fe9406c16a682feccd8d2ec588758e8d95a1ded305d26db615a3a1729c4"} Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.339836 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f7fd77bcb-cxmbt" event={"ID":"2ec05cb1-c40c-48cb-ba64-9321abb6287c","Type":"ContainerDied","Data":"f70a38889bf4976d6a2973d48baab3f476e3c806a93b7aa505848883883d40cc"} Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.339893 4858 scope.go:117] "RemoveContainer" containerID="1fb5d262371a89abeed10a1670e4080ebaeb89f0f9b926b587ffc3cf13b2dccc" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.340068 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5f7fd77bcb-cxmbt" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.343163 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"892953fc-7620-4274-9f89-c86e2ec23782","Type":"ContainerStarted","Data":"f29933c19cce709bd978403b71f230fec113006005740d09d8b39ade62cf704c"} Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.345489 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"338280a3-8a99-4294-81bb-bff485d21e74","Type":"ContainerDied","Data":"45eb93a9225d6d3a18f9a07a442e078603fe06282db28aacbfab9e15c29b04bb"} Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.345573 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.373202 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-scripts\") pod \"338280a3-8a99-4294-81bb-bff485d21e74\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.373256 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dhbb\" (UniqueName: \"kubernetes.io/projected/338280a3-8a99-4294-81bb-bff485d21e74-kube-api-access-2dhbb\") pod \"338280a3-8a99-4294-81bb-bff485d21e74\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.373288 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ec05cb1-c40c-48cb-ba64-9321abb6287c-scripts\") pod \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.373403 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ec05cb1-c40c-48cb-ba64-9321abb6287c-logs\") pod \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.373425 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-horizon-tls-certs\") pod \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.373480 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-config-data\") pod \"338280a3-8a99-4294-81bb-bff485d21e74\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.373522 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wjkm\" (UniqueName: \"kubernetes.io/projected/2ec05cb1-c40c-48cb-ba64-9321abb6287c-kube-api-access-4wjkm\") pod \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.373623 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-horizon-secret-key\") pod \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.373672 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-sg-core-conf-yaml\") pod \"338280a3-8a99-4294-81bb-bff485d21e74\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.373723 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ec05cb1-c40c-48cb-ba64-9321abb6287c-config-data\") pod \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\" (UID: 
\"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.373746 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-combined-ca-bundle\") pod \"338280a3-8a99-4294-81bb-bff485d21e74\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.373768 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/338280a3-8a99-4294-81bb-bff485d21e74-run-httpd\") pod \"338280a3-8a99-4294-81bb-bff485d21e74\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.373845 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/338280a3-8a99-4294-81bb-bff485d21e74-log-httpd\") pod \"338280a3-8a99-4294-81bb-bff485d21e74\" (UID: \"338280a3-8a99-4294-81bb-bff485d21e74\") " Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.373873 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-combined-ca-bundle\") pod \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\" (UID: \"2ec05cb1-c40c-48cb-ba64-9321abb6287c\") " Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.390058 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ec05cb1-c40c-48cb-ba64-9321abb6287c-logs" (OuterVolumeSpecName: "logs") pod "2ec05cb1-c40c-48cb-ba64-9321abb6287c" (UID: "2ec05cb1-c40c-48cb-ba64-9321abb6287c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.396507 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/338280a3-8a99-4294-81bb-bff485d21e74-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "338280a3-8a99-4294-81bb-bff485d21e74" (UID: "338280a3-8a99-4294-81bb-bff485d21e74"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.396715 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-scripts" (OuterVolumeSpecName: "scripts") pod "338280a3-8a99-4294-81bb-bff485d21e74" (UID: "338280a3-8a99-4294-81bb-bff485d21e74"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.397423 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/338280a3-8a99-4294-81bb-bff485d21e74-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "338280a3-8a99-4294-81bb-bff485d21e74" (UID: "338280a3-8a99-4294-81bb-bff485d21e74"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.403348 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ec05cb1-c40c-48cb-ba64-9321abb6287c-kube-api-access-4wjkm" (OuterVolumeSpecName: "kube-api-access-4wjkm") pod "2ec05cb1-c40c-48cb-ba64-9321abb6287c" (UID: "2ec05cb1-c40c-48cb-ba64-9321abb6287c"). InnerVolumeSpecName "kube-api-access-4wjkm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.421997 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2ec05cb1-c40c-48cb-ba64-9321abb6287c" (UID: "2ec05cb1-c40c-48cb-ba64-9321abb6287c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.438907 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/338280a3-8a99-4294-81bb-bff485d21e74-kube-api-access-2dhbb" (OuterVolumeSpecName: "kube-api-access-2dhbb") pod "338280a3-8a99-4294-81bb-bff485d21e74" (UID: "338280a3-8a99-4294-81bb-bff485d21e74"). InnerVolumeSpecName "kube-api-access-2dhbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.460197 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.483827132 podStartE2EDuration="17.460174095s" podCreationTimestamp="2026-01-27 20:30:34 +0000 UTC" firstStartedPulling="2026-01-27 20:30:35.780923457 +0000 UTC m=+1380.488739163" lastFinishedPulling="2026-01-27 20:30:50.75727042 +0000 UTC m=+1395.465086126" observedRunningTime="2026-01-27 20:30:51.378606666 +0000 UTC m=+1396.086422402" watchObservedRunningTime="2026-01-27 20:30:51.460174095 +0000 UTC m=+1396.167989801" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.474697 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ec05cb1-c40c-48cb-ba64-9321abb6287c" (UID: "2ec05cb1-c40c-48cb-ba64-9321abb6287c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.476275 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/338280a3-8a99-4294-81bb-bff485d21e74-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.476296 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.476306 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.476318 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dhbb\" (UniqueName: \"kubernetes.io/projected/338280a3-8a99-4294-81bb-bff485d21e74-kube-api-access-2dhbb\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.476328 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ec05cb1-c40c-48cb-ba64-9321abb6287c-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.476337 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wjkm\" (UniqueName: \"kubernetes.io/projected/2ec05cb1-c40c-48cb-ba64-9321abb6287c-kube-api-access-4wjkm\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.476345 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.476353 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/338280a3-8a99-4294-81bb-bff485d21e74-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.525096 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ec05cb1-c40c-48cb-ba64-9321abb6287c-scripts" (OuterVolumeSpecName: "scripts") pod "2ec05cb1-c40c-48cb-ba64-9321abb6287c" (UID: "2ec05cb1-c40c-48cb-ba64-9321abb6287c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.530025 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ec05cb1-c40c-48cb-ba64-9321abb6287c-config-data" (OuterVolumeSpecName: "config-data") pod "2ec05cb1-c40c-48cb-ba64-9321abb6287c" (UID: "2ec05cb1-c40c-48cb-ba64-9321abb6287c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.550733 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "338280a3-8a99-4294-81bb-bff485d21e74" (UID: "338280a3-8a99-4294-81bb-bff485d21e74"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.582895 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.582951 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ec05cb1-c40c-48cb-ba64-9321abb6287c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.582964 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ec05cb1-c40c-48cb-ba64-9321abb6287c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.590972 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "2ec05cb1-c40c-48cb-ba64-9321abb6287c" (UID: "2ec05cb1-c40c-48cb-ba64-9321abb6287c"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.608282 4858 scope.go:117] "RemoveContainer" containerID="51a7810e6ed3102dd208860bde7beb41d43fac91b3815b7ecdc22e5d766e5ed9" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.687395 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ec05cb1-c40c-48cb-ba64-9321abb6287c-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.693353 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-574fc98977-sp7zp"] Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.729334 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "338280a3-8a99-4294-81bb-bff485d21e74" (UID: "338280a3-8a99-4294-81bb-bff485d21e74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.767720 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-config-data" (OuterVolumeSpecName: "config-data") pod "338280a3-8a99-4294-81bb-bff485d21e74" (UID: "338280a3-8a99-4294-81bb-bff485d21e74"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.792808 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.792845 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338280a3-8a99-4294-81bb-bff485d21e74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.871333 4858 scope.go:117] "RemoveContainer" containerID="f521e014f71f9991ed827a0ae545a83c3e5b200b5d422ead23c8ac9031dfd224" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.891538 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.919614 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5f7fd77bcb-cxmbt"] Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.956032 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5f7fd77bcb-cxmbt"] Jan 27 20:30:51 crc kubenswrapper[4858]: I0127 20:30:51.959782 4858 scope.go:117] "RemoveContainer" containerID="e35048be77a102b455c2b021f6947f7873f1a6ae3b276ee7fdc26580b43fc21e" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:51.999828 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:51.999913 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-combined-ca-bundle\") pod \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:51.999940 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-httpd-run\") pod \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:51.999985 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-scripts\") pod \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.000074 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l654n\" (UniqueName: \"kubernetes.io/projected/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-kube-api-access-l654n\") pod \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.000146 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-config-data\") pod \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " Jan 27 20:30:52 crc 
kubenswrapper[4858]: I0127 20:30:52.000300 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-public-tls-certs\") pod \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.000332 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-logs\") pod \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\" (UID: \"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508\") " Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.001459 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-logs" (OuterVolumeSpecName: "logs") pod "8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" (UID: "8f0c007a-8fb6-4995-ad7e-c3d06d3e5508"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:51.999931 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.002159 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" (UID: "8f0c007a-8fb6-4995-ad7e-c3d06d3e5508"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.009168 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" (UID: "8f0c007a-8fb6-4995-ad7e-c3d06d3e5508"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.032695 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-scripts" (OuterVolumeSpecName: "scripts") pod "8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" (UID: "8f0c007a-8fb6-4995-ad7e-c3d06d3e5508"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.034020 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.035388 4858 scope.go:117] "RemoveContainer" containerID="890c582e5cf0d5588add4cfb427399d9e59ae0f97f5f21735b8b2ff6155b3443" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.048929 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-kube-api-access-l654n" (OuterVolumeSpecName: "kube-api-access-l654n") pod "8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" (UID: "8f0c007a-8fb6-4995-ad7e-c3d06d3e5508"). InnerVolumeSpecName "kube-api-access-l654n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.057811 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:30:52 crc kubenswrapper[4858]: E0127 20:30:52.058309 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338280a3-8a99-4294-81bb-bff485d21e74" containerName="ceilometer-central-agent" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.058327 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="338280a3-8a99-4294-81bb-bff485d21e74" containerName="ceilometer-central-agent" Jan 27 20:30:52 crc kubenswrapper[4858]: E0127 20:30:52.058335 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338280a3-8a99-4294-81bb-bff485d21e74" containerName="sg-core" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.058341 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="338280a3-8a99-4294-81bb-bff485d21e74" containerName="sg-core" Jan 27 20:30:52 crc kubenswrapper[4858]: E0127 20:30:52.058360 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338280a3-8a99-4294-81bb-bff485d21e74" containerName="proxy-httpd" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.058367 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="338280a3-8a99-4294-81bb-bff485d21e74" containerName="proxy-httpd" Jan 27 20:30:52 crc kubenswrapper[4858]: E0127 20:30:52.058385 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ec05cb1-c40c-48cb-ba64-9321abb6287c" containerName="horizon" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.058393 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ec05cb1-c40c-48cb-ba64-9321abb6287c" containerName="horizon" Jan 27 20:30:52 crc kubenswrapper[4858]: E0127 20:30:52.058401 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ec05cb1-c40c-48cb-ba64-9321abb6287c" containerName="horizon-log" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.058406 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ec05cb1-c40c-48cb-ba64-9321abb6287c" containerName="horizon-log" Jan 27 20:30:52 crc kubenswrapper[4858]: E0127 20:30:52.058418 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338280a3-8a99-4294-81bb-bff485d21e74" containerName="ceilometer-notification-agent" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.058427 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="338280a3-8a99-4294-81bb-bff485d21e74" containerName="ceilometer-notification-agent" Jan 27 20:30:52 crc kubenswrapper[4858]: E0127 20:30:52.058438 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" containerName="glance-httpd" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.058445 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" containerName="glance-httpd" Jan 27 20:30:52 crc kubenswrapper[4858]: E0127 20:30:52.058464 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" containerName="glance-log" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.058470 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" containerName="glance-log" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.058783 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="338280a3-8a99-4294-81bb-bff485d21e74" 
containerName="ceilometer-notification-agent" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.058805 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" containerName="glance-log" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.058816 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="338280a3-8a99-4294-81bb-bff485d21e74" containerName="proxy-httpd" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.058830 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ec05cb1-c40c-48cb-ba64-9321abb6287c" containerName="horizon-log" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.058927 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="338280a3-8a99-4294-81bb-bff485d21e74" containerName="ceilometer-central-agent" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.058943 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" containerName="glance-httpd" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.058955 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ec05cb1-c40c-48cb-ba64-9321abb6287c" containerName="horizon" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.058965 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="338280a3-8a99-4294-81bb-bff485d21e74" containerName="sg-core" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.060699 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.067804 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.069909 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.072487 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.088740 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" (UID: "8f0c007a-8fb6-4995-ad7e-c3d06d3e5508"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.089226 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-config-data" (OuterVolumeSpecName: "config-data") pod "8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" (UID: "8f0c007a-8fb6-4995-ad7e-c3d06d3e5508"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.090751 4858 scope.go:117] "RemoveContainer" containerID="998018ad68c3e321ad5ee5240e7b96c47151dd626b1e2a865bab3f619aba0d21" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.105021 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.105062 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.105073 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.105082 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.105092 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.105101 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l654n\" (UniqueName: \"kubernetes.io/projected/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-kube-api-access-l654n\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.105109 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.106815 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ec05cb1-c40c-48cb-ba64-9321abb6287c" path="/var/lib/kubelet/pods/2ec05cb1-c40c-48cb-ba64-9321abb6287c/volumes" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.112472 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="338280a3-8a99-4294-81bb-bff485d21e74" path="/var/lib/kubelet/pods/338280a3-8a99-4294-81bb-bff485d21e74/volumes" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.141439 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.172891 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" (UID: "8f0c007a-8fb6-4995-ad7e-c3d06d3e5508"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.206703 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.206757 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15639292-7397-4047-a813-75884683c2f9-run-httpd\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.206818 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c662\" (UniqueName: \"kubernetes.io/projected/15639292-7397-4047-a813-75884683c2f9-kube-api-access-2c662\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.206914 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.207275 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-config-data\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.207365 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15639292-7397-4047-a813-75884683c2f9-log-httpd\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.207434 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-scripts\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.207632 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.207658 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.308922 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc 
kubenswrapper[4858]: I0127 20:30:52.308994 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15639292-7397-4047-a813-75884683c2f9-run-httpd\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.309036 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c662\" (UniqueName: \"kubernetes.io/projected/15639292-7397-4047-a813-75884683c2f9-kube-api-access-2c662\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.309062 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.309136 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-config-data\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.309167 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15639292-7397-4047-a813-75884683c2f9-log-httpd\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.309191 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-scripts\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.309809 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15639292-7397-4047-a813-75884683c2f9-run-httpd\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.312186 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15639292-7397-4047-a813-75884683c2f9-log-httpd\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.314443 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-scripts\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.317291 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.319126 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.319881 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-config-data\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.337816 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c662\" (UniqueName: \"kubernetes.io/projected/15639292-7397-4047-a813-75884683c2f9-kube-api-access-2c662\") pod \"ceilometer-0\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.357362 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-574fc98977-sp7zp" event={"ID":"57e04641-598d-459b-9996-0ae4182ae4fb","Type":"ContainerStarted","Data":"3e84d3b1baaea8f2afb95f90129de3f3e69a988ed7906b9c64f2502b897ad261"} Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.357412 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-574fc98977-sp7zp" event={"ID":"57e04641-598d-459b-9996-0ae4182ae4fb","Type":"ContainerStarted","Data":"85c30a90fbcc99fc66d2b072a46f5dc4642a1c36ae1445658c7235a7a7055bb2"} Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.357422 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-574fc98977-sp7zp" event={"ID":"57e04641-598d-459b-9996-0ae4182ae4fb","Type":"ContainerStarted","Data":"0b52935ac8b67f6ea0b72b287d3279887196bcf27f5221b9908d6d997bd614d1"} Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.357687 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.357720 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.367668 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8f0c007a-8fb6-4995-ad7e-c3d06d3e5508","Type":"ContainerDied","Data":"6acb91f160503db50a5b4cf35d8fd00e145acfea6af2eaf197de4555e60d0fc3"} Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.367778 4858 scope.go:117] "RemoveContainer" containerID="d4d28fe9406c16a682feccd8d2ec588758e8d95a1ded305d26db615a3a1729c4" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.367819 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.387115 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-574fc98977-sp7zp" podStartSLOduration=9.387089169 podStartE2EDuration="9.387089169s" podCreationTimestamp="2026-01-27 20:30:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:30:52.380847 +0000 UTC m=+1397.088662696" watchObservedRunningTime="2026-01-27 20:30:52.387089169 +0000 UTC m=+1397.094904875" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.402990 4858 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod7b795cea-c66d-4bca-8e9c-7da6cf08adf8"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod7b795cea-c66d-4bca-8e9c-7da6cf08adf8] : Timed out while waiting for systemd to remove kubepods-besteffort-pod7b795cea_c66d_4bca_8e9c_7da6cf08adf8.slice" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.422962 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.425821 4858 scope.go:117] "RemoveContainer" containerID="a0393d519aed8fda89dc4bb31ff6c539972cf05f32fc03dd05146e42e6ce41b4" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.426419 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.464916 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.506934 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.508703 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.513189 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.514010 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.546418 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.619751 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwfb2\" (UniqueName: \"kubernetes.io/projected/be03aee4-8299-48e7-91cb-18bbad0b2a0b-kube-api-access-bwfb2\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.620298 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be03aee4-8299-48e7-91cb-18bbad0b2a0b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.620386 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be03aee4-8299-48e7-91cb-18bbad0b2a0b-scripts\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.620424 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be03aee4-8299-48e7-91cb-18bbad0b2a0b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.620460 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be03aee4-8299-48e7-91cb-18bbad0b2a0b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.620494 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.620616 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be03aee4-8299-48e7-91cb-18bbad0b2a0b-logs\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.620641 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be03aee4-8299-48e7-91cb-18bbad0b2a0b-config-data\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.723028 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be03aee4-8299-48e7-91cb-18bbad0b2a0b-scripts\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.723092 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be03aee4-8299-48e7-91cb-18bbad0b2a0b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.723128 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be03aee4-8299-48e7-91cb-18bbad0b2a0b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.723157 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.723247 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be03aee4-8299-48e7-91cb-18bbad0b2a0b-logs\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.723265 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be03aee4-8299-48e7-91cb-18bbad0b2a0b-config-data\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.723324 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwfb2\" (UniqueName: \"kubernetes.io/projected/be03aee4-8299-48e7-91cb-18bbad0b2a0b-kube-api-access-bwfb2\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.723360 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be03aee4-8299-48e7-91cb-18bbad0b2a0b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.725198 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.729304 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be03aee4-8299-48e7-91cb-18bbad0b2a0b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.729366 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be03aee4-8299-48e7-91cb-18bbad0b2a0b-logs\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.736797 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be03aee4-8299-48e7-91cb-18bbad0b2a0b-config-data\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.739222 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be03aee4-8299-48e7-91cb-18bbad0b2a0b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.746613 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be03aee4-8299-48e7-91cb-18bbad0b2a0b-scripts\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.757540 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be03aee4-8299-48e7-91cb-18bbad0b2a0b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.772419 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwfb2\" (UniqueName: \"kubernetes.io/projected/be03aee4-8299-48e7-91cb-18bbad0b2a0b-kube-api-access-bwfb2\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.801544 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"be03aee4-8299-48e7-91cb-18bbad0b2a0b\") " pod="openstack/glance-default-external-api-0" Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.879291 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.890903 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 
20:30:52 crc kubenswrapper[4858]: I0127 20:30:52.961254 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 20:30:53 crc kubenswrapper[4858]: I0127 20:30:53.397632 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15639292-7397-4047-a813-75884683c2f9","Type":"ContainerStarted","Data":"fc81cdf8bad34c069d825b525d0e89d346a90d516750c9ecd5f9f2541be7daa2"} Jan 27 20:30:53 crc kubenswrapper[4858]: I0127 20:30:53.398100 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15639292-7397-4047-a813-75884683c2f9","Type":"ContainerStarted","Data":"0de017f9500bce8eb61431d758d246fdfafb3ea160230f4fa9e3ecd9af24a099"} Jan 27 20:30:53 crc kubenswrapper[4858]: I0127 20:30:53.459468 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:30:53 crc kubenswrapper[4858]: I0127 20:30:53.459755 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="01d3077e-3576-4247-a840-2cb60819c113" containerName="glance-log" containerID="cri-o://7d79048978fb62d2e5df62dc2ddbf6bd30ceeac4da867cc49ca0ef6342be60f8" gracePeriod=30 Jan 27 20:30:53 crc kubenswrapper[4858]: I0127 20:30:53.459832 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="01d3077e-3576-4247-a840-2cb60819c113" containerName="glance-httpd" containerID="cri-o://70d9d0e5fd8930a93991dc7409a9cf825437ba8786be526d91ab11d45308c4a0" gracePeriod=30 Jan 27 20:30:53 crc kubenswrapper[4858]: I0127 20:30:53.476737 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-internal-api-0" podUID="01d3077e-3576-4247-a840-2cb60819c113" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.167:9292/healthcheck\": EOF" Jan 27 20:30:53 crc kubenswrapper[4858]: I0127 20:30:53.478026 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-internal-api-0" podUID="01d3077e-3576-4247-a840-2cb60819c113" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.167:9292/healthcheck\": EOF" Jan 27 20:30:53 crc kubenswrapper[4858]: I0127 20:30:53.700719 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 20:30:53 crc kubenswrapper[4858]: W0127 20:30:53.707431 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe03aee4_8299_48e7_91cb_18bbad0b2a0b.slice/crio-ce9ad13399cec50d2dc89d4b348230dafbe6d4a0f017f7ea09c4098338eb8c3a WatchSource:0}: Error finding container ce9ad13399cec50d2dc89d4b348230dafbe6d4a0f017f7ea09c4098338eb8c3a: Status 404 returned error can't find the container with id ce9ad13399cec50d2dc89d4b348230dafbe6d4a0f017f7ea09c4098338eb8c3a Jan 27 20:30:53 crc kubenswrapper[4858]: I0127 20:30:53.735703 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:30:54 crc kubenswrapper[4858]: I0127 20:30:54.088724 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f0c007a-8fb6-4995-ad7e-c3d06d3e5508" path="/var/lib/kubelet/pods/8f0c007a-8fb6-4995-ad7e-c3d06d3e5508/volumes" Jan 27 20:30:54 crc kubenswrapper[4858]: I0127 20:30:54.419294 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="01d3077e-3576-4247-a840-2cb60819c113" containerID="7d79048978fb62d2e5df62dc2ddbf6bd30ceeac4da867cc49ca0ef6342be60f8" exitCode=143 Jan 27 20:30:54 crc kubenswrapper[4858]: I0127 20:30:54.419356 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"01d3077e-3576-4247-a840-2cb60819c113","Type":"ContainerDied","Data":"7d79048978fb62d2e5df62dc2ddbf6bd30ceeac4da867cc49ca0ef6342be60f8"} Jan 27 20:30:54 crc kubenswrapper[4858]: I0127 20:30:54.427045 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15639292-7397-4047-a813-75884683c2f9","Type":"ContainerStarted","Data":"4bdda2d757a0753d0a05c9bec69236de2b9bb2cd8d1f51adf24109219764a0cc"} Jan 27 20:30:54 crc kubenswrapper[4858]: I0127 20:30:54.431156 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"be03aee4-8299-48e7-91cb-18bbad0b2a0b","Type":"ContainerStarted","Data":"ce9ad13399cec50d2dc89d4b348230dafbe6d4a0f017f7ea09c4098338eb8c3a"} Jan 27 20:30:55 crc kubenswrapper[4858]: I0127 20:30:55.495694 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"be03aee4-8299-48e7-91cb-18bbad0b2a0b","Type":"ContainerStarted","Data":"8e26b90392135e472d3a506e7bb6682741d756f136003ead81b7720a09701edb"} Jan 27 20:30:55 crc kubenswrapper[4858]: I0127 20:30:55.506936 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15639292-7397-4047-a813-75884683c2f9","Type":"ContainerStarted","Data":"f5b64abe75acaf03720b3fabcdd60af806c0dc7d29867d4828e0f47276bd0ff3"} Jan 27 20:30:56 crc kubenswrapper[4858]: I0127 20:30:56.520024 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15639292-7397-4047-a813-75884683c2f9" containerName="ceilometer-central-agent" containerID="cri-o://fc81cdf8bad34c069d825b525d0e89d346a90d516750c9ecd5f9f2541be7daa2" gracePeriod=30 Jan 27 20:30:56 crc kubenswrapper[4858]: I0127 20:30:56.520095 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15639292-7397-4047-a813-75884683c2f9" containerName="sg-core" containerID="cri-o://f5b64abe75acaf03720b3fabcdd60af806c0dc7d29867d4828e0f47276bd0ff3" gracePeriod=30 Jan 27 20:30:56 crc kubenswrapper[4858]: I0127 20:30:56.520109 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15639292-7397-4047-a813-75884683c2f9" containerName="proxy-httpd" containerID="cri-o://bdd6b29e6b736e37323282c84fe0e2e5d2ea65af773646740c9528543bdc3695" gracePeriod=30 Jan 27 20:30:56 crc kubenswrapper[4858]: I0127 20:30:56.520163 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15639292-7397-4047-a813-75884683c2f9" containerName="ceilometer-notification-agent" containerID="cri-o://4bdda2d757a0753d0a05c9bec69236de2b9bb2cd8d1f51adf24109219764a0cc" gracePeriod=30 Jan 27 20:30:56 crc kubenswrapper[4858]: I0127 20:30:56.520215 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15639292-7397-4047-a813-75884683c2f9","Type":"ContainerStarted","Data":"bdd6b29e6b736e37323282c84fe0e2e5d2ea65af773646740c9528543bdc3695"} Jan 27 20:30:56 crc kubenswrapper[4858]: I0127 20:30:56.521911 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 20:30:56 
crc kubenswrapper[4858]: I0127 20:30:56.525977 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"be03aee4-8299-48e7-91cb-18bbad0b2a0b","Type":"ContainerStarted","Data":"46081d74b5492b5ef4fb7cfb97dcc1b284ddf7e361b1e5f76b7185d4abde27b2"} Jan 27 20:30:56 crc kubenswrapper[4858]: I0127 20:30:56.546679 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.302867643 podStartE2EDuration="5.546651826s" podCreationTimestamp="2026-01-27 20:30:51 +0000 UTC" firstStartedPulling="2026-01-27 20:30:52.890672747 +0000 UTC m=+1397.598488453" lastFinishedPulling="2026-01-27 20:30:56.13445693 +0000 UTC m=+1400.842272636" observedRunningTime="2026-01-27 20:30:56.545890344 +0000 UTC m=+1401.253706050" watchObservedRunningTime="2026-01-27 20:30:56.546651826 +0000 UTC m=+1401.254467532" Jan 27 20:30:56 crc kubenswrapper[4858]: I0127 20:30:56.585036 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.585014271 podStartE2EDuration="4.585014271s" podCreationTimestamp="2026-01-27 20:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:30:56.575038193 +0000 UTC m=+1401.282853899" watchObservedRunningTime="2026-01-27 20:30:56.585014271 +0000 UTC m=+1401.292829977" Jan 27 20:30:57 crc kubenswrapper[4858]: I0127 20:30:57.604691 4858 generic.go:334] "Generic (PLEG): container finished" podID="01d3077e-3576-4247-a840-2cb60819c113" containerID="70d9d0e5fd8930a93991dc7409a9cf825437ba8786be526d91ab11d45308c4a0" exitCode=0 Jan 27 20:30:57 crc kubenswrapper[4858]: I0127 20:30:57.605272 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"01d3077e-3576-4247-a840-2cb60819c113","Type":"ContainerDied","Data":"70d9d0e5fd8930a93991dc7409a9cf825437ba8786be526d91ab11d45308c4a0"} Jan 27 20:30:57 crc kubenswrapper[4858]: I0127 20:30:57.661852 4858 generic.go:334] "Generic (PLEG): container finished" podID="15639292-7397-4047-a813-75884683c2f9" containerID="bdd6b29e6b736e37323282c84fe0e2e5d2ea65af773646740c9528543bdc3695" exitCode=0 Jan 27 20:30:57 crc kubenswrapper[4858]: I0127 20:30:57.661885 4858 generic.go:334] "Generic (PLEG): container finished" podID="15639292-7397-4047-a813-75884683c2f9" containerID="f5b64abe75acaf03720b3fabcdd60af806c0dc7d29867d4828e0f47276bd0ff3" exitCode=2 Jan 27 20:30:57 crc kubenswrapper[4858]: I0127 20:30:57.661893 4858 generic.go:334] "Generic (PLEG): container finished" podID="15639292-7397-4047-a813-75884683c2f9" containerID="4bdda2d757a0753d0a05c9bec69236de2b9bb2cd8d1f51adf24109219764a0cc" exitCode=0 Jan 27 20:30:57 crc kubenswrapper[4858]: I0127 20:30:57.662888 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15639292-7397-4047-a813-75884683c2f9","Type":"ContainerDied","Data":"bdd6b29e6b736e37323282c84fe0e2e5d2ea65af773646740c9528543bdc3695"} Jan 27 20:30:57 crc kubenswrapper[4858]: I0127 20:30:57.662916 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15639292-7397-4047-a813-75884683c2f9","Type":"ContainerDied","Data":"f5b64abe75acaf03720b3fabcdd60af806c0dc7d29867d4828e0f47276bd0ff3"} Jan 27 20:30:57 crc kubenswrapper[4858]: I0127 20:30:57.662927 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"15639292-7397-4047-a813-75884683c2f9","Type":"ContainerDied","Data":"4bdda2d757a0753d0a05c9bec69236de2b9bb2cd8d1f51adf24109219764a0cc"} Jan 27 20:30:57 crc kubenswrapper[4858]: I0127 20:30:57.948937 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.068774 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jwxm\" (UniqueName: \"kubernetes.io/projected/01d3077e-3576-4247-a840-2cb60819c113-kube-api-access-6jwxm\") pod \"01d3077e-3576-4247-a840-2cb60819c113\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.068928 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"01d3077e-3576-4247-a840-2cb60819c113\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.069014 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-internal-tls-certs\") pod \"01d3077e-3576-4247-a840-2cb60819c113\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.069110 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-scripts\") pod \"01d3077e-3576-4247-a840-2cb60819c113\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.069248 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-config-data\") pod \"01d3077e-3576-4247-a840-2cb60819c113\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.069276 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-combined-ca-bundle\") pod \"01d3077e-3576-4247-a840-2cb60819c113\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.069315 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01d3077e-3576-4247-a840-2cb60819c113-logs\") pod \"01d3077e-3576-4247-a840-2cb60819c113\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.069460 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/01d3077e-3576-4247-a840-2cb60819c113-httpd-run\") pod \"01d3077e-3576-4247-a840-2cb60819c113\" (UID: \"01d3077e-3576-4247-a840-2cb60819c113\") " Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.070502 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01d3077e-3576-4247-a840-2cb60819c113-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "01d3077e-3576-4247-a840-2cb60819c113" (UID: "01d3077e-3576-4247-a840-2cb60819c113"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.073275 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01d3077e-3576-4247-a840-2cb60819c113-logs" (OuterVolumeSpecName: "logs") pod "01d3077e-3576-4247-a840-2cb60819c113" (UID: "01d3077e-3576-4247-a840-2cb60819c113"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.074438 4858 scope.go:117] "RemoveContainer" containerID="7aa258bfad971de2ab7658e1139538288d0fd8f4d00d9ee09a7de38a6a9010cf" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.078495 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-scripts" (OuterVolumeSpecName: "scripts") pod "01d3077e-3576-4247-a840-2cb60819c113" (UID: "01d3077e-3576-4247-a840-2cb60819c113"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.078683 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01d3077e-3576-4247-a840-2cb60819c113-kube-api-access-6jwxm" (OuterVolumeSpecName: "kube-api-access-6jwxm") pod "01d3077e-3576-4247-a840-2cb60819c113" (UID: "01d3077e-3576-4247-a840-2cb60819c113"). InnerVolumeSpecName "kube-api-access-6jwxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.091874 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "01d3077e-3576-4247-a840-2cb60819c113" (UID: "01d3077e-3576-4247-a840-2cb60819c113"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.144477 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01d3077e-3576-4247-a840-2cb60819c113" (UID: "01d3077e-3576-4247-a840-2cb60819c113"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.157509 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "01d3077e-3576-4247-a840-2cb60819c113" (UID: "01d3077e-3576-4247-a840-2cb60819c113"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.172106 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.172434 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.172515 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.173971 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01d3077e-3576-4247-a840-2cb60819c113-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.174131 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/01d3077e-3576-4247-a840-2cb60819c113-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.174219 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jwxm\" (UniqueName: \"kubernetes.io/projected/01d3077e-3576-4247-a840-2cb60819c113-kube-api-access-6jwxm\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.174319 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.186145 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-config-data" (OuterVolumeSpecName: "config-data") pod "01d3077e-3576-4247-a840-2cb60819c113" (UID: "01d3077e-3576-4247-a840-2cb60819c113"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.233542 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.276742 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.277053 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01d3077e-3576-4247-a840-2cb60819c113-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.706911 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"99d9c559-c61f-4bc2-907b-af9f9be0ce1b","Type":"ContainerStarted","Data":"26dd29ca697d5eb74bc7a9a351007c651e1b5a9d5789601412dbabc73c2d32ba"} Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.709372 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"01d3077e-3576-4247-a840-2cb60819c113","Type":"ContainerDied","Data":"51117531e0344114aa8578e89498ec566e883afdb6b6b0acb5a14f94f991dccf"} Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.709411 4858 scope.go:117] "RemoveContainer" containerID="70d9d0e5fd8930a93991dc7409a9cf825437ba8786be526d91ab11d45308c4a0" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.711758 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.746394 4858 scope.go:117] "RemoveContainer" containerID="7d79048978fb62d2e5df62dc2ddbf6bd30ceeac4da867cc49ca0ef6342be60f8" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.770751 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.783373 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.816481 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:30:58 crc kubenswrapper[4858]: E0127 20:30:58.817011 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d3077e-3576-4247-a840-2cb60819c113" containerName="glance-httpd" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.817030 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d3077e-3576-4247-a840-2cb60819c113" containerName="glance-httpd" Jan 27 20:30:58 crc kubenswrapper[4858]: E0127 20:30:58.817039 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d3077e-3576-4247-a840-2cb60819c113" containerName="glance-log" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.817046 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d3077e-3576-4247-a840-2cb60819c113" containerName="glance-log" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.817265 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="01d3077e-3576-4247-a840-2cb60819c113" containerName="glance-log" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.817291 4858 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="01d3077e-3576-4247-a840-2cb60819c113" containerName="glance-httpd" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.818506 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.820610 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.820925 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.834463 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.839081 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-574fc98977-sp7zp" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.842676 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.910517 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719be79f-34ec-4e95-b1a7-e507c6214053-config-data\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.910626 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/719be79f-34ec-4e95-b1a7-e507c6214053-scripts\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.910647 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719be79f-34ec-4e95-b1a7-e507c6214053-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.910670 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/719be79f-34ec-4e95-b1a7-e507c6214053-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.910725 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/719be79f-34ec-4e95-b1a7-e507c6214053-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.910770 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gftn\" (UniqueName: \"kubernetes.io/projected/719be79f-34ec-4e95-b1a7-e507c6214053-kube-api-access-9gftn\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " 
pod="openstack/glance-default-internal-api-0" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.910814 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/719be79f-34ec-4e95-b1a7-e507c6214053-logs\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:58 crc kubenswrapper[4858]: I0127 20:30:58.910900 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.012258 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719be79f-34ec-4e95-b1a7-e507c6214053-config-data\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.012330 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/719be79f-34ec-4e95-b1a7-e507c6214053-scripts\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.012351 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719be79f-34ec-4e95-b1a7-e507c6214053-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.012374 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/719be79f-34ec-4e95-b1a7-e507c6214053-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.012416 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/719be79f-34ec-4e95-b1a7-e507c6214053-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.012455 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gftn\" (UniqueName: \"kubernetes.io/projected/719be79f-34ec-4e95-b1a7-e507c6214053-kube-api-access-9gftn\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.012495 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/719be79f-34ec-4e95-b1a7-e507c6214053-logs\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc 
kubenswrapper[4858]: I0127 20:30:59.012573 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.013148 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.014193 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/719be79f-34ec-4e95-b1a7-e507c6214053-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.016295 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/719be79f-34ec-4e95-b1a7-e507c6214053-logs\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.021480 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/719be79f-34ec-4e95-b1a7-e507c6214053-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.024121 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/719be79f-34ec-4e95-b1a7-e507c6214053-config-data\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.029287 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/719be79f-34ec-4e95-b1a7-e507c6214053-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.034472 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/719be79f-34ec-4e95-b1a7-e507c6214053-scripts\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.048905 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gftn\" (UniqueName: \"kubernetes.io/projected/719be79f-34ec-4e95-b1a7-e507c6214053-kube-api-access-9gftn\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.100703 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"719be79f-34ec-4e95-b1a7-e507c6214053\") " pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.148317 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 20:30:59 crc kubenswrapper[4858]: I0127 20:30:59.774301 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 20:30:59 crc kubenswrapper[4858]: W0127 20:30:59.784438 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod719be79f_34ec_4e95_b1a7_e507c6214053.slice/crio-256fd623ec4abeac57583d5b6f8732222958d30e616491460a4e4bd251422dc9 WatchSource:0}: Error finding container 256fd623ec4abeac57583d5b6f8732222958d30e616491460a4e4bd251422dc9: Status 404 returned error can't find the container with id 256fd623ec4abeac57583d5b6f8732222958d30e616491460a4e4bd251422dc9 Jan 27 20:31:00 crc kubenswrapper[4858]: I0127 20:31:00.103989 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01d3077e-3576-4247-a840-2cb60819c113" path="/var/lib/kubelet/pods/01d3077e-3576-4247-a840-2cb60819c113/volumes" Jan 27 20:31:00 crc kubenswrapper[4858]: I0127 20:31:00.754857 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"719be79f-34ec-4e95-b1a7-e507c6214053","Type":"ContainerStarted","Data":"f1db1a9e0d717247797b436f666c9938e66d043fa2dcfbad179a41bbd0bdc359"} Jan 27 20:31:00 crc kubenswrapper[4858]: I0127 20:31:00.755269 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"719be79f-34ec-4e95-b1a7-e507c6214053","Type":"ContainerStarted","Data":"256fd623ec4abeac57583d5b6f8732222958d30e616491460a4e4bd251422dc9"} Jan 27 20:31:01 crc kubenswrapper[4858]: I0127 20:31:01.767190 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"719be79f-34ec-4e95-b1a7-e507c6214053","Type":"ContainerStarted","Data":"4729cc901aea2e1fb8959804c049fc36267202308a35d33aae9a2b6658139632"} Jan 27 20:31:01 crc kubenswrapper[4858]: I0127 20:31:01.794836 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.794812661 podStartE2EDuration="3.794812661s" podCreationTimestamp="2026-01-27 20:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:31:01.790323992 +0000 UTC m=+1406.498139708" watchObservedRunningTime="2026-01-27 20:31:01.794812661 +0000 UTC m=+1406.502628367" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.153526 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hns4w"] Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.161327 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hns4w" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.180376 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hns4w"] Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.193793 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a-operator-scripts\") pod \"nova-api-db-create-hns4w\" (UID: \"8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a\") " pod="openstack/nova-api-db-create-hns4w" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.193901 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-654c2\" (UniqueName: \"kubernetes.io/projected/8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a-kube-api-access-654c2\") pod \"nova-api-db-create-hns4w\" (UID: \"8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a\") " pod="openstack/nova-api-db-create-hns4w" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.281826 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-zhg58"] Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.283588 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zhg58" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.302367 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a-operator-scripts\") pod \"nova-api-db-create-hns4w\" (UID: \"8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a\") " pod="openstack/nova-api-db-create-hns4w" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.307478 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a-operator-scripts\") pod \"nova-api-db-create-hns4w\" (UID: \"8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a\") " pod="openstack/nova-api-db-create-hns4w" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.307830 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-654c2\" (UniqueName: \"kubernetes.io/projected/8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a-kube-api-access-654c2\") pod \"nova-api-db-create-hns4w\" (UID: \"8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a\") " pod="openstack/nova-api-db-create-hns4w" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.333459 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-654c2\" (UniqueName: \"kubernetes.io/projected/8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a-kube-api-access-654c2\") pod \"nova-api-db-create-hns4w\" (UID: \"8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a\") " pod="openstack/nova-api-db-create-hns4w" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.333828 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zhg58"] Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.383759 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-2drzb"] Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.396157 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-2drzb" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.409748 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l82jp\" (UniqueName: \"kubernetes.io/projected/63cacc1d-9f19-4bc5-aec0-93d97976666a-kube-api-access-l82jp\") pod \"nova-cell0-db-create-zhg58\" (UID: \"63cacc1d-9f19-4bc5-aec0-93d97976666a\") " pod="openstack/nova-cell0-db-create-zhg58" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.409822 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63cacc1d-9f19-4bc5-aec0-93d97976666a-operator-scripts\") pod \"nova-cell0-db-create-zhg58\" (UID: \"63cacc1d-9f19-4bc5-aec0-93d97976666a\") " pod="openstack/nova-cell0-db-create-zhg58" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.427256 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ce78-account-create-update-597dz"] Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.438727 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ce78-account-create-update-597dz" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.455773 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.473721 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ce78-account-create-update-597dz"] Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.492297 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-2drzb"] Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.512625 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l82jp\" (UniqueName: \"kubernetes.io/projected/63cacc1d-9f19-4bc5-aec0-93d97976666a-kube-api-access-l82jp\") pod \"nova-cell0-db-create-zhg58\" (UID: \"63cacc1d-9f19-4bc5-aec0-93d97976666a\") " pod="openstack/nova-cell0-db-create-zhg58" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.512724 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7db9d22f-78a1-402c-abda-87f2f6fe1a3d-operator-scripts\") pod \"nova-api-ce78-account-create-update-597dz\" (UID: \"7db9d22f-78a1-402c-abda-87f2f6fe1a3d\") " pod="openstack/nova-api-ce78-account-create-update-597dz" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.513041 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63cacc1d-9f19-4bc5-aec0-93d97976666a-operator-scripts\") pod \"nova-cell0-db-create-zhg58\" (UID: \"63cacc1d-9f19-4bc5-aec0-93d97976666a\") " pod="openstack/nova-cell0-db-create-zhg58" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.513125 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg8wr\" (UniqueName: \"kubernetes.io/projected/27f669f0-2cb1-47cb-b220-758381725229-kube-api-access-cg8wr\") pod \"nova-cell1-db-create-2drzb\" (UID: \"27f669f0-2cb1-47cb-b220-758381725229\") " pod="openstack/nova-cell1-db-create-2drzb" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.513304 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27f669f0-2cb1-47cb-b220-758381725229-operator-scripts\") pod \"nova-cell1-db-create-2drzb\" (UID: \"27f669f0-2cb1-47cb-b220-758381725229\") " pod="openstack/nova-cell1-db-create-2drzb" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.513443 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ms9s\" (UniqueName: \"kubernetes.io/projected/7db9d22f-78a1-402c-abda-87f2f6fe1a3d-kube-api-access-7ms9s\") pod \"nova-api-ce78-account-create-update-597dz\" (UID: \"7db9d22f-78a1-402c-abda-87f2f6fe1a3d\") " pod="openstack/nova-api-ce78-account-create-update-597dz" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.513988 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63cacc1d-9f19-4bc5-aec0-93d97976666a-operator-scripts\") pod \"nova-cell0-db-create-zhg58\" (UID: \"63cacc1d-9f19-4bc5-aec0-93d97976666a\") " pod="openstack/nova-cell0-db-create-zhg58" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.539675 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l82jp\" (UniqueName: \"kubernetes.io/projected/63cacc1d-9f19-4bc5-aec0-93d97976666a-kube-api-access-l82jp\") pod \"nova-cell0-db-create-zhg58\" (UID: \"63cacc1d-9f19-4bc5-aec0-93d97976666a\") " pod="openstack/nova-cell0-db-create-zhg58" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.570922 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-d500-account-create-update-nh98x"] Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.572263 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d500-account-create-update-nh98x" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.574826 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.594368 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hns4w" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.598778 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d500-account-create-update-nh98x"] Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.614007 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-zhg58" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.614768 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27f669f0-2cb1-47cb-b220-758381725229-operator-scripts\") pod \"nova-cell1-db-create-2drzb\" (UID: \"27f669f0-2cb1-47cb-b220-758381725229\") " pod="openstack/nova-cell1-db-create-2drzb" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.614846 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ms9s\" (UniqueName: \"kubernetes.io/projected/7db9d22f-78a1-402c-abda-87f2f6fe1a3d-kube-api-access-7ms9s\") pod \"nova-api-ce78-account-create-update-597dz\" (UID: \"7db9d22f-78a1-402c-abda-87f2f6fe1a3d\") " pod="openstack/nova-api-ce78-account-create-update-597dz" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.614915 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/358dc567-a5bf-4e80-843c-dabb5e3535e2-operator-scripts\") pod \"nova-cell0-d500-account-create-update-nh98x\" (UID: \"358dc567-a5bf-4e80-843c-dabb5e3535e2\") " pod="openstack/nova-cell0-d500-account-create-update-nh98x" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.614961 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7db9d22f-78a1-402c-abda-87f2f6fe1a3d-operator-scripts\") pod \"nova-api-ce78-account-create-update-597dz\" (UID: \"7db9d22f-78a1-402c-abda-87f2f6fe1a3d\") " pod="openstack/nova-api-ce78-account-create-update-597dz" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.615008 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg8wr\" (UniqueName: \"kubernetes.io/projected/27f669f0-2cb1-47cb-b220-758381725229-kube-api-access-cg8wr\") pod \"nova-cell1-db-create-2drzb\" (UID: \"27f669f0-2cb1-47cb-b220-758381725229\") " pod="openstack/nova-cell1-db-create-2drzb" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.615040 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvr9w\" (UniqueName: \"kubernetes.io/projected/358dc567-a5bf-4e80-843c-dabb5e3535e2-kube-api-access-wvr9w\") pod \"nova-cell0-d500-account-create-update-nh98x\" (UID: \"358dc567-a5bf-4e80-843c-dabb5e3535e2\") " pod="openstack/nova-cell0-d500-account-create-update-nh98x" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.615940 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27f669f0-2cb1-47cb-b220-758381725229-operator-scripts\") pod \"nova-cell1-db-create-2drzb\" (UID: \"27f669f0-2cb1-47cb-b220-758381725229\") " pod="openstack/nova-cell1-db-create-2drzb" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.619363 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7db9d22f-78a1-402c-abda-87f2f6fe1a3d-operator-scripts\") pod \"nova-api-ce78-account-create-update-597dz\" (UID: \"7db9d22f-78a1-402c-abda-87f2f6fe1a3d\") " pod="openstack/nova-api-ce78-account-create-update-597dz" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.640148 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ms9s\" 
(UniqueName: \"kubernetes.io/projected/7db9d22f-78a1-402c-abda-87f2f6fe1a3d-kube-api-access-7ms9s\") pod \"nova-api-ce78-account-create-update-597dz\" (UID: \"7db9d22f-78a1-402c-abda-87f2f6fe1a3d\") " pod="openstack/nova-api-ce78-account-create-update-597dz" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.655069 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg8wr\" (UniqueName: \"kubernetes.io/projected/27f669f0-2cb1-47cb-b220-758381725229-kube-api-access-cg8wr\") pod \"nova-cell1-db-create-2drzb\" (UID: \"27f669f0-2cb1-47cb-b220-758381725229\") " pod="openstack/nova-cell1-db-create-2drzb" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.718986 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/358dc567-a5bf-4e80-843c-dabb5e3535e2-operator-scripts\") pod \"nova-cell0-d500-account-create-update-nh98x\" (UID: \"358dc567-a5bf-4e80-843c-dabb5e3535e2\") " pod="openstack/nova-cell0-d500-account-create-update-nh98x" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.719093 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvr9w\" (UniqueName: \"kubernetes.io/projected/358dc567-a5bf-4e80-843c-dabb5e3535e2-kube-api-access-wvr9w\") pod \"nova-cell0-d500-account-create-update-nh98x\" (UID: \"358dc567-a5bf-4e80-843c-dabb5e3535e2\") " pod="openstack/nova-cell0-d500-account-create-update-nh98x" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.719766 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/358dc567-a5bf-4e80-843c-dabb5e3535e2-operator-scripts\") pod \"nova-cell0-d500-account-create-update-nh98x\" (UID: \"358dc567-a5bf-4e80-843c-dabb5e3535e2\") " pod="openstack/nova-cell0-d500-account-create-update-nh98x" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.736071 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2drzb" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.749614 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvr9w\" (UniqueName: \"kubernetes.io/projected/358dc567-a5bf-4e80-843c-dabb5e3535e2-kube-api-access-wvr9w\") pod \"nova-cell0-d500-account-create-update-nh98x\" (UID: \"358dc567-a5bf-4e80-843c-dabb5e3535e2\") " pod="openstack/nova-cell0-d500-account-create-update-nh98x" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.771491 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-48b0-account-create-update-gwf9m"] Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.773804 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-48b0-account-create-update-gwf9m" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.777695 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.778436 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ce78-account-create-update-597dz" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.792086 4858 generic.go:334] "Generic (PLEG): container finished" podID="15639292-7397-4047-a813-75884683c2f9" containerID="fc81cdf8bad34c069d825b525d0e89d346a90d516750c9ecd5f9f2541be7daa2" exitCode=0 Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.793304 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15639292-7397-4047-a813-75884683c2f9","Type":"ContainerDied","Data":"fc81cdf8bad34c069d825b525d0e89d346a90d516750c9ecd5f9f2541be7daa2"} Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.822755 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg7zv\" (UniqueName: \"kubernetes.io/projected/90043c41-134e-47fc-9086-1c45a761a7c0-kube-api-access-lg7zv\") pod \"nova-cell1-48b0-account-create-update-gwf9m\" (UID: \"90043c41-134e-47fc-9086-1c45a761a7c0\") " pod="openstack/nova-cell1-48b0-account-create-update-gwf9m" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.823126 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90043c41-134e-47fc-9086-1c45a761a7c0-operator-scripts\") pod \"nova-cell1-48b0-account-create-update-gwf9m\" (UID: \"90043c41-134e-47fc-9086-1c45a761a7c0\") " pod="openstack/nova-cell1-48b0-account-create-update-gwf9m" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.833570 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-48b0-account-create-update-gwf9m"] Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.905190 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.927129 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg7zv\" (UniqueName: \"kubernetes.io/projected/90043c41-134e-47fc-9086-1c45a761a7c0-kube-api-access-lg7zv\") pod \"nova-cell1-48b0-account-create-update-gwf9m\" (UID: \"90043c41-134e-47fc-9086-1c45a761a7c0\") " pod="openstack/nova-cell1-48b0-account-create-update-gwf9m" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.927189 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90043c41-134e-47fc-9086-1c45a761a7c0-operator-scripts\") pod \"nova-cell1-48b0-account-create-update-gwf9m\" (UID: \"90043c41-134e-47fc-9086-1c45a761a7c0\") " pod="openstack/nova-cell1-48b0-account-create-update-gwf9m" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.927986 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90043c41-134e-47fc-9086-1c45a761a7c0-operator-scripts\") pod \"nova-cell1-48b0-account-create-update-gwf9m\" (UID: \"90043c41-134e-47fc-9086-1c45a761a7c0\") " pod="openstack/nova-cell1-48b0-account-create-update-gwf9m" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.933015 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-d500-account-create-update-nh98x" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.962733 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.962795 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 20:31:02 crc kubenswrapper[4858]: I0127 20:31:02.968680 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg7zv\" (UniqueName: \"kubernetes.io/projected/90043c41-134e-47fc-9086-1c45a761a7c0-kube-api-access-lg7zv\") pod \"nova-cell1-48b0-account-create-update-gwf9m\" (UID: \"90043c41-134e-47fc-9086-1c45a761a7c0\") " pod="openstack/nova-cell1-48b0-account-create-update-gwf9m" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.045197 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.046962 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-combined-ca-bundle\") pod \"15639292-7397-4047-a813-75884683c2f9\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.047042 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-sg-core-conf-yaml\") pod \"15639292-7397-4047-a813-75884683c2f9\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.047218 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-scripts\") pod \"15639292-7397-4047-a813-75884683c2f9\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.047272 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15639292-7397-4047-a813-75884683c2f9-run-httpd\") pod \"15639292-7397-4047-a813-75884683c2f9\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.047329 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-config-data\") pod \"15639292-7397-4047-a813-75884683c2f9\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.047382 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c662\" (UniqueName: \"kubernetes.io/projected/15639292-7397-4047-a813-75884683c2f9-kube-api-access-2c662\") pod \"15639292-7397-4047-a813-75884683c2f9\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.047420 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15639292-7397-4047-a813-75884683c2f9-log-httpd\") pod \"15639292-7397-4047-a813-75884683c2f9\" (UID: \"15639292-7397-4047-a813-75884683c2f9\") " Jan 27 20:31:03 crc 
kubenswrapper[4858]: I0127 20:31:03.048765 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15639292-7397-4047-a813-75884683c2f9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "15639292-7397-4047-a813-75884683c2f9" (UID: "15639292-7397-4047-a813-75884683c2f9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.049403 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15639292-7397-4047-a813-75884683c2f9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.050921 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15639292-7397-4047-a813-75884683c2f9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "15639292-7397-4047-a813-75884683c2f9" (UID: "15639292-7397-4047-a813-75884683c2f9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.061580 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-scripts" (OuterVolumeSpecName: "scripts") pod "15639292-7397-4047-a813-75884683c2f9" (UID: "15639292-7397-4047-a813-75884683c2f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.065033 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.080986 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15639292-7397-4047-a813-75884683c2f9-kube-api-access-2c662" (OuterVolumeSpecName: "kube-api-access-2c662") pod "15639292-7397-4047-a813-75884683c2f9" (UID: "15639292-7397-4047-a813-75884683c2f9"). InnerVolumeSpecName "kube-api-access-2c662". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.101265 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "15639292-7397-4047-a813-75884683c2f9" (UID: "15639292-7397-4047-a813-75884683c2f9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.118707 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-48b0-account-create-update-gwf9m" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.151177 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.151513 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c662\" (UniqueName: \"kubernetes.io/projected/15639292-7397-4047-a813-75884683c2f9-kube-api-access-2c662\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.151525 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15639292-7397-4047-a813-75884683c2f9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.151535 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.247126 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-config-data" (OuterVolumeSpecName: "config-data") pod "15639292-7397-4047-a813-75884683c2f9" (UID: "15639292-7397-4047-a813-75884683c2f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.272977 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.333410 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15639292-7397-4047-a813-75884683c2f9" (UID: "15639292-7397-4047-a813-75884683c2f9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.376271 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15639292-7397-4047-a813-75884683c2f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.408188 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zhg58"] Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.440676 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hns4w"] Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.766129 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ce78-account-create-update-597dz"] Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.797648 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-2drzb"] Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.810183 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15639292-7397-4047-a813-75884683c2f9","Type":"ContainerDied","Data":"0de017f9500bce8eb61431d758d246fdfafb3ea160230f4fa9e3ecd9af24a099"} Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.810241 4858 scope.go:117] "RemoveContainer" containerID="bdd6b29e6b736e37323282c84fe0e2e5d2ea65af773646740c9528543bdc3695" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.810369 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.811511 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ce78-account-create-update-597dz" event={"ID":"7db9d22f-78a1-402c-abda-87f2f6fe1a3d","Type":"ContainerStarted","Data":"8baae94399d2f855c73358c3cc13c59c1f3e072612ff1c123b9409d7dfde9ddc"} Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.819532 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zhg58" event={"ID":"63cacc1d-9f19-4bc5-aec0-93d97976666a","Type":"ContainerStarted","Data":"2329bced1af7a87ac45cdcf4efc6e92dac2e6eea9f56553643c19521cc365877"} Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.819603 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zhg58" event={"ID":"63cacc1d-9f19-4bc5-aec0-93d97976666a","Type":"ContainerStarted","Data":"17127bccc4d5c7f621fd99feb226d8a60e4179f8e3ee2de277e43a83e6b939bf"} Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.831050 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hns4w" event={"ID":"8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a","Type":"ContainerStarted","Data":"e74b260e6272aff3b80500e6fc6354a9ea5a7fef0b64ff0d6f6f8e8a2136bde4"} Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.831086 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hns4w" event={"ID":"8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a","Type":"ContainerStarted","Data":"bd7a85d838f3c6f6d77cc6697ac10fa1faee9b9b3d0ce71412c5d617054abd8d"} Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.831100 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.831219 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.852506 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-zhg58" podStartSLOduration=1.8524784090000002 podStartE2EDuration="1.852478409s" podCreationTimestamp="2026-01-27 20:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:31:03.843120339 +0000 UTC m=+1408.550936035" watchObservedRunningTime="2026-01-27 20:31:03.852478409 +0000 UTC m=+1408.560294115" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.879376 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-hns4w" podStartSLOduration=1.8793524320000001 podStartE2EDuration="1.879352432s" podCreationTimestamp="2026-01-27 20:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:31:03.868263823 +0000 UTC m=+1408.576079529" watchObservedRunningTime="2026-01-27 20:31:03.879352432 +0000 UTC m=+1408.587168138" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.915449 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.927626 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.939629 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d500-account-create-update-nh98x"] Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.939717 4858 scope.go:117] "RemoveContainer" containerID="f5b64abe75acaf03720b3fabcdd60af806c0dc7d29867d4828e0f47276bd0ff3" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.951000 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:31:03 crc kubenswrapper[4858]: E0127 20:31:03.951535 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15639292-7397-4047-a813-75884683c2f9" containerName="proxy-httpd" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.951576 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="15639292-7397-4047-a813-75884683c2f9" containerName="proxy-httpd" Jan 27 20:31:03 crc kubenswrapper[4858]: E0127 20:31:03.951595 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15639292-7397-4047-a813-75884683c2f9" containerName="sg-core" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.951603 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="15639292-7397-4047-a813-75884683c2f9" containerName="sg-core" Jan 27 20:31:03 crc kubenswrapper[4858]: E0127 20:31:03.951617 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15639292-7397-4047-a813-75884683c2f9" containerName="ceilometer-notification-agent" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.951624 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="15639292-7397-4047-a813-75884683c2f9" containerName="ceilometer-notification-agent" Jan 27 20:31:03 crc kubenswrapper[4858]: E0127 20:31:03.951653 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15639292-7397-4047-a813-75884683c2f9" containerName="ceilometer-central-agent" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.951661 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="15639292-7397-4047-a813-75884683c2f9" containerName="ceilometer-central-agent" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.951876 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="15639292-7397-4047-a813-75884683c2f9" containerName="sg-core" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.951895 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="15639292-7397-4047-a813-75884683c2f9" containerName="ceilometer-central-agent" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.951905 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="15639292-7397-4047-a813-75884683c2f9" containerName="ceilometer-notification-agent" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.951916 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="15639292-7397-4047-a813-75884683c2f9" containerName="proxy-httpd" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.953845 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.961174 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.968351 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 20:31:03 crc kubenswrapper[4858]: I0127 20:31:03.968594 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.031504 4858 scope.go:117] "RemoveContainer" containerID="4bdda2d757a0753d0a05c9bec69236de2b9bb2cd8d1f51adf24109219764a0cc" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.060264 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-48b0-account-create-update-gwf9m"] Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.089673 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15639292-7397-4047-a813-75884683c2f9" path="/var/lib/kubelet/pods/15639292-7397-4047-a813-75884683c2f9/volumes" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.090799 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.090837 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.098786 4858 scope.go:117] "RemoveContainer" containerID="fc81cdf8bad34c069d825b525d0e89d346a90d516750c9ecd5f9f2541be7daa2" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.106431 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58d9bd62-8b5f-4144-8322-86542d1268c8-log-httpd\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.106514 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-scripts\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.106579 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.106634 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-config-data\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.106700 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58d9bd62-8b5f-4144-8322-86542d1268c8-run-httpd\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.107125 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.108064 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jk9v\" (UniqueName: \"kubernetes.io/projected/58d9bd62-8b5f-4144-8322-86542d1268c8-kube-api-access-6jk9v\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.197659 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.210421 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58d9bd62-8b5f-4144-8322-86542d1268c8-run-httpd\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.210595 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.210691 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jk9v\" (UniqueName: \"kubernetes.io/projected/58d9bd62-8b5f-4144-8322-86542d1268c8-kube-api-access-6jk9v\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.210778 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58d9bd62-8b5f-4144-8322-86542d1268c8-log-httpd\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.210826 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-scripts\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.210921 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.210996 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-config-data\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.211101 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58d9bd62-8b5f-4144-8322-86542d1268c8-run-httpd\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.212020 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58d9bd62-8b5f-4144-8322-86542d1268c8-log-httpd\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.220209 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-config-data\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.221231 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.222090 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.225311 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-scripts\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.245459 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jk9v\" (UniqueName: \"kubernetes.io/projected/58d9bd62-8b5f-4144-8322-86542d1268c8-kube-api-access-6jk9v\") pod \"ceilometer-0\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.322295 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.853262 4858 generic.go:334] "Generic (PLEG): container finished" podID="27f669f0-2cb1-47cb-b220-758381725229" containerID="3f6ac5ceb752f3a0a892b9a9879c0652b4f812af34ad8dc8501d076dc168a8df" exitCode=0 Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.853411 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2drzb" event={"ID":"27f669f0-2cb1-47cb-b220-758381725229","Type":"ContainerDied","Data":"3f6ac5ceb752f3a0a892b9a9879c0652b4f812af34ad8dc8501d076dc168a8df"} Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.853719 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2drzb" event={"ID":"27f669f0-2cb1-47cb-b220-758381725229","Type":"ContainerStarted","Data":"55482cf5fcbbaefdfd15bf934d9036a15b8d882dba9012c57e5ccf60e11f20c5"} Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.859304 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d500-account-create-update-nh98x" event={"ID":"358dc567-a5bf-4e80-843c-dabb5e3535e2","Type":"ContainerStarted","Data":"21765a91083515e21d827dc54889afb0d4c75fbfb63a562393b6904329efe205"} Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.859405 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d500-account-create-update-nh98x" event={"ID":"358dc567-a5bf-4e80-843c-dabb5e3535e2","Type":"ContainerStarted","Data":"80ae1a0cde9649973e47d78435a87d4eb85bf99c9a60565fb2de626f1cd45f0a"} Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.867366 4858 generic.go:334] "Generic (PLEG): container finished" podID="7db9d22f-78a1-402c-abda-87f2f6fe1a3d" containerID="eb5100e4a93531ab529d3150eaa996104168f2c33232ff5df0d38f045f7ba3d6" exitCode=0 Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.867452 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ce78-account-create-update-597dz" event={"ID":"7db9d22f-78a1-402c-abda-87f2f6fe1a3d","Type":"ContainerDied","Data":"eb5100e4a93531ab529d3150eaa996104168f2c33232ff5df0d38f045f7ba3d6"} Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.869622 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-48b0-account-create-update-gwf9m" event={"ID":"90043c41-134e-47fc-9086-1c45a761a7c0","Type":"ContainerStarted","Data":"e6d409e57f83c08b66daf0b6b2a005d52a8c2b300559ee15dbf01d93994e5be8"} Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.869648 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-48b0-account-create-update-gwf9m" event={"ID":"90043c41-134e-47fc-9086-1c45a761a7c0","Type":"ContainerStarted","Data":"f71a6c8c7d8963f4dc296268e862159d12db5883167cea3b0706c27cfb9bba18"} Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.872477 4858 generic.go:334] "Generic (PLEG): container finished" podID="63cacc1d-9f19-4bc5-aec0-93d97976666a" containerID="2329bced1af7a87ac45cdcf4efc6e92dac2e6eea9f56553643c19521cc365877" exitCode=0 Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.872675 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zhg58" event={"ID":"63cacc1d-9f19-4bc5-aec0-93d97976666a","Type":"ContainerDied","Data":"2329bced1af7a87ac45cdcf4efc6e92dac2e6eea9f56553643c19521cc365877"} Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.883701 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a" containerID="e74b260e6272aff3b80500e6fc6354a9ea5a7fef0b64ff0d6f6f8e8a2136bde4" exitCode=0 Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.883878 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hns4w" event={"ID":"8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a","Type":"ContainerDied","Data":"e74b260e6272aff3b80500e6fc6354a9ea5a7fef0b64ff0d6f6f8e8a2136bde4"} Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.920659 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.923716 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-d500-account-create-update-nh98x" podStartSLOduration=2.923689757 podStartE2EDuration="2.923689757s" podCreationTimestamp="2026-01-27 20:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:31:04.90120026 +0000 UTC m=+1409.609015986" watchObservedRunningTime="2026-01-27 20:31:04.923689757 +0000 UTC m=+1409.631505463" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.959158 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 27 20:31:04 crc kubenswrapper[4858]: I0127 20:31:04.987383 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-48b0-account-create-update-gwf9m" podStartSLOduration=2.98735486 podStartE2EDuration="2.98735486s" podCreationTimestamp="2026-01-27 20:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:31:04.976505458 +0000 UTC m=+1409.684321184" watchObservedRunningTime="2026-01-27 20:31:04.98735486 +0000 UTC m=+1409.695170566" Jan 27 20:31:05 crc kubenswrapper[4858]: I0127 20:31:05.903053 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58d9bd62-8b5f-4144-8322-86542d1268c8","Type":"ContainerStarted","Data":"b01c000e9fe9305d5ed03668b99294be5f7e83ba8af81acc45b81e80dd7d1921"} Jan 27 20:31:05 crc kubenswrapper[4858]: I0127 20:31:05.905311 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58d9bd62-8b5f-4144-8322-86542d1268c8","Type":"ContainerStarted","Data":"3ce084b49bcc6871f0d17e28e6fdc0dbb1bf7d32612ae4456db2fddde4ba7e09"} Jan 27 20:31:05 crc kubenswrapper[4858]: I0127 20:31:05.905443 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58d9bd62-8b5f-4144-8322-86542d1268c8","Type":"ContainerStarted","Data":"3f8c4d1e83817b984fcc9c8554d5cd2d88783415151ba06d4035c81d6a2b0b19"} Jan 27 20:31:05 crc kubenswrapper[4858]: I0127 20:31:05.910081 4858 generic.go:334] "Generic (PLEG): container finished" podID="358dc567-a5bf-4e80-843c-dabb5e3535e2" containerID="21765a91083515e21d827dc54889afb0d4c75fbfb63a562393b6904329efe205" exitCode=0 Jan 27 20:31:05 crc kubenswrapper[4858]: I0127 20:31:05.910256 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d500-account-create-update-nh98x" event={"ID":"358dc567-a5bf-4e80-843c-dabb5e3535e2","Type":"ContainerDied","Data":"21765a91083515e21d827dc54889afb0d4c75fbfb63a562393b6904329efe205"} Jan 27 20:31:05 crc kubenswrapper[4858]: I0127 20:31:05.913616 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="90043c41-134e-47fc-9086-1c45a761a7c0" containerID="e6d409e57f83c08b66daf0b6b2a005d52a8c2b300559ee15dbf01d93994e5be8" exitCode=0 Jan 27 20:31:05 crc kubenswrapper[4858]: I0127 20:31:05.913847 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-48b0-account-create-update-gwf9m" event={"ID":"90043c41-134e-47fc-9086-1c45a761a7c0","Type":"ContainerDied","Data":"e6d409e57f83c08b66daf0b6b2a005d52a8c2b300559ee15dbf01d93994e5be8"} Jan 27 20:31:05 crc kubenswrapper[4858]: I0127 20:31:05.914201 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 20:31:05 crc kubenswrapper[4858]: I0127 20:31:05.914251 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.306874 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.588144 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ce78-account-create-update-597dz" Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.603283 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2drzb" Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.657183 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.700766 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ms9s\" (UniqueName: \"kubernetes.io/projected/7db9d22f-78a1-402c-abda-87f2f6fe1a3d-kube-api-access-7ms9s\") pod \"7db9d22f-78a1-402c-abda-87f2f6fe1a3d\" (UID: \"7db9d22f-78a1-402c-abda-87f2f6fe1a3d\") " Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.700995 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg8wr\" (UniqueName: \"kubernetes.io/projected/27f669f0-2cb1-47cb-b220-758381725229-kube-api-access-cg8wr\") pod \"27f669f0-2cb1-47cb-b220-758381725229\" (UID: \"27f669f0-2cb1-47cb-b220-758381725229\") " Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.701216 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7db9d22f-78a1-402c-abda-87f2f6fe1a3d-operator-scripts\") pod \"7db9d22f-78a1-402c-abda-87f2f6fe1a3d\" (UID: \"7db9d22f-78a1-402c-abda-87f2f6fe1a3d\") " Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.701312 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27f669f0-2cb1-47cb-b220-758381725229-operator-scripts\") pod \"27f669f0-2cb1-47cb-b220-758381725229\" (UID: \"27f669f0-2cb1-47cb-b220-758381725229\") " Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.702918 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27f669f0-2cb1-47cb-b220-758381725229-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27f669f0-2cb1-47cb-b220-758381725229" (UID: "27f669f0-2cb1-47cb-b220-758381725229"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.705176 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7db9d22f-78a1-402c-abda-87f2f6fe1a3d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7db9d22f-78a1-402c-abda-87f2f6fe1a3d" (UID: "7db9d22f-78a1-402c-abda-87f2f6fe1a3d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.723840 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27f669f0-2cb1-47cb-b220-758381725229-kube-api-access-cg8wr" (OuterVolumeSpecName: "kube-api-access-cg8wr") pod "27f669f0-2cb1-47cb-b220-758381725229" (UID: "27f669f0-2cb1-47cb-b220-758381725229"). InnerVolumeSpecName "kube-api-access-cg8wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.745742 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7db9d22f-78a1-402c-abda-87f2f6fe1a3d-kube-api-access-7ms9s" (OuterVolumeSpecName: "kube-api-access-7ms9s") pod "7db9d22f-78a1-402c-abda-87f2f6fe1a3d" (UID: "7db9d22f-78a1-402c-abda-87f2f6fe1a3d"). InnerVolumeSpecName "kube-api-access-7ms9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.808762 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg8wr\" (UniqueName: \"kubernetes.io/projected/27f669f0-2cb1-47cb-b220-758381725229-kube-api-access-cg8wr\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.808795 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7db9d22f-78a1-402c-abda-87f2f6fe1a3d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.808806 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27f669f0-2cb1-47cb-b220-758381725229-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.808817 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ms9s\" (UniqueName: \"kubernetes.io/projected/7db9d22f-78a1-402c-abda-87f2f6fe1a3d-kube-api-access-7ms9s\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.832102 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zhg58" Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.891609 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hns4w" Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.929824 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l82jp\" (UniqueName: \"kubernetes.io/projected/63cacc1d-9f19-4bc5-aec0-93d97976666a-kube-api-access-l82jp\") pod \"63cacc1d-9f19-4bc5-aec0-93d97976666a\" (UID: \"63cacc1d-9f19-4bc5-aec0-93d97976666a\") " Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.930088 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63cacc1d-9f19-4bc5-aec0-93d97976666a-operator-scripts\") pod \"63cacc1d-9f19-4bc5-aec0-93d97976666a\" (UID: \"63cacc1d-9f19-4bc5-aec0-93d97976666a\") " Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.942877 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63cacc1d-9f19-4bc5-aec0-93d97976666a-kube-api-access-l82jp" (OuterVolumeSpecName: "kube-api-access-l82jp") pod "63cacc1d-9f19-4bc5-aec0-93d97976666a" (UID: "63cacc1d-9f19-4bc5-aec0-93d97976666a"). InnerVolumeSpecName "kube-api-access-l82jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:31:06 crc kubenswrapper[4858]: I0127 20:31:06.954931 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63cacc1d-9f19-4bc5-aec0-93d97976666a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "63cacc1d-9f19-4bc5-aec0-93d97976666a" (UID: "63cacc1d-9f19-4bc5-aec0-93d97976666a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.002292 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2drzb" event={"ID":"27f669f0-2cb1-47cb-b220-758381725229","Type":"ContainerDied","Data":"55482cf5fcbbaefdfd15bf934d9036a15b8d882dba9012c57e5ccf60e11f20c5"} Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.002372 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55482cf5fcbbaefdfd15bf934d9036a15b8d882dba9012c57e5ccf60e11f20c5" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.002497 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-2drzb" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.032506 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-654c2\" (UniqueName: \"kubernetes.io/projected/8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a-kube-api-access-654c2\") pod \"8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a\" (UID: \"8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a\") " Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.032650 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58d9bd62-8b5f-4144-8322-86542d1268c8","Type":"ContainerStarted","Data":"88fe1af6a03e466c2c7afa274201de688c8f42879eb01e170f133ed2e7514d38"} Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.032799 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a-operator-scripts\") pod \"8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a\" (UID: \"8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a\") " Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.033193 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63cacc1d-9f19-4bc5-aec0-93d97976666a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.033207 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l82jp\" (UniqueName: \"kubernetes.io/projected/63cacc1d-9f19-4bc5-aec0-93d97976666a-kube-api-access-l82jp\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.033590 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a" (UID: "8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.040800 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a-kube-api-access-654c2" (OuterVolumeSpecName: "kube-api-access-654c2") pod "8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a" (UID: "8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a"). InnerVolumeSpecName "kube-api-access-654c2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.048076 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ce78-account-create-update-597dz" event={"ID":"7db9d22f-78a1-402c-abda-87f2f6fe1a3d","Type":"ContainerDied","Data":"8baae94399d2f855c73358c3cc13c59c1f3e072612ff1c123b9409d7dfde9ddc"} Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.048142 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8baae94399d2f855c73358c3cc13c59c1f3e072612ff1c123b9409d7dfde9ddc" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.048234 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ce78-account-create-update-597dz" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.052349 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zhg58" event={"ID":"63cacc1d-9f19-4bc5-aec0-93d97976666a","Type":"ContainerDied","Data":"17127bccc4d5c7f621fd99feb226d8a60e4179f8e3ee2de277e43a83e6b939bf"} Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.052389 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zhg58" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.052419 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17127bccc4d5c7f621fd99feb226d8a60e4179f8e3ee2de277e43a83e6b939bf" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.058975 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hns4w" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.066964 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hns4w" event={"ID":"8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a","Type":"ContainerDied","Data":"bd7a85d838f3c6f6d77cc6697ac10fa1faee9b9b3d0ce71412c5d617054abd8d"} Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.067040 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd7a85d838f3c6f6d77cc6697ac10fa1faee9b9b3d0ce71412c5d617054abd8d" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.135191 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.135219 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-654c2\" (UniqueName: \"kubernetes.io/projected/8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a-kube-api-access-654c2\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.469738 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-48b0-account-create-update-gwf9m" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.557257 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg7zv\" (UniqueName: \"kubernetes.io/projected/90043c41-134e-47fc-9086-1c45a761a7c0-kube-api-access-lg7zv\") pod \"90043c41-134e-47fc-9086-1c45a761a7c0\" (UID: \"90043c41-134e-47fc-9086-1c45a761a7c0\") " Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.558234 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90043c41-134e-47fc-9086-1c45a761a7c0-operator-scripts\") pod \"90043c41-134e-47fc-9086-1c45a761a7c0\" (UID: \"90043c41-134e-47fc-9086-1c45a761a7c0\") " Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.559918 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90043c41-134e-47fc-9086-1c45a761a7c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90043c41-134e-47fc-9086-1c45a761a7c0" (UID: "90043c41-134e-47fc-9086-1c45a761a7c0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.576130 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90043c41-134e-47fc-9086-1c45a761a7c0-kube-api-access-lg7zv" (OuterVolumeSpecName: "kube-api-access-lg7zv") pod "90043c41-134e-47fc-9086-1c45a761a7c0" (UID: "90043c41-134e-47fc-9086-1c45a761a7c0"). InnerVolumeSpecName "kube-api-access-lg7zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.618017 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d500-account-create-update-nh98x" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.650068 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.660138 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/358dc567-a5bf-4e80-843c-dabb5e3535e2-operator-scripts\") pod \"358dc567-a5bf-4e80-843c-dabb5e3535e2\" (UID: \"358dc567-a5bf-4e80-843c-dabb5e3535e2\") " Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.660328 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvr9w\" (UniqueName: \"kubernetes.io/projected/358dc567-a5bf-4e80-843c-dabb5e3535e2-kube-api-access-wvr9w\") pod \"358dc567-a5bf-4e80-843c-dabb5e3535e2\" (UID: \"358dc567-a5bf-4e80-843c-dabb5e3535e2\") " Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.660886 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg7zv\" (UniqueName: \"kubernetes.io/projected/90043c41-134e-47fc-9086-1c45a761a7c0-kube-api-access-lg7zv\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.660906 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90043c41-134e-47fc-9086-1c45a761a7c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.660964 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/358dc567-a5bf-4e80-843c-dabb5e3535e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "358dc567-a5bf-4e80-843c-dabb5e3535e2" (UID: "358dc567-a5bf-4e80-843c-dabb5e3535e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.672827 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/358dc567-a5bf-4e80-843c-dabb5e3535e2-kube-api-access-wvr9w" (OuterVolumeSpecName: "kube-api-access-wvr9w") pod "358dc567-a5bf-4e80-843c-dabb5e3535e2" (UID: "358dc567-a5bf-4e80-843c-dabb5e3535e2"). InnerVolumeSpecName "kube-api-access-wvr9w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.763330 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/358dc567-a5bf-4e80-843c-dabb5e3535e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:07 crc kubenswrapper[4858]: I0127 20:31:07.763374 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvr9w\" (UniqueName: \"kubernetes.io/projected/358dc567-a5bf-4e80-843c-dabb5e3535e2-kube-api-access-wvr9w\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:08 crc kubenswrapper[4858]: I0127 20:31:08.070678 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d500-account-create-update-nh98x" Jan 27 20:31:08 crc kubenswrapper[4858]: I0127 20:31:08.072615 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-48b0-account-create-update-gwf9m" Jan 27 20:31:08 crc kubenswrapper[4858]: I0127 20:31:08.083257 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d500-account-create-update-nh98x" event={"ID":"358dc567-a5bf-4e80-843c-dabb5e3535e2","Type":"ContainerDied","Data":"80ae1a0cde9649973e47d78435a87d4eb85bf99c9a60565fb2de626f1cd45f0a"} Jan 27 20:31:08 crc kubenswrapper[4858]: I0127 20:31:08.083318 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80ae1a0cde9649973e47d78435a87d4eb85bf99c9a60565fb2de626f1cd45f0a" Jan 27 20:31:08 crc kubenswrapper[4858]: I0127 20:31:08.083333 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-48b0-account-create-update-gwf9m" event={"ID":"90043c41-134e-47fc-9086-1c45a761a7c0","Type":"ContainerDied","Data":"f71a6c8c7d8963f4dc296268e862159d12db5883167cea3b0706c27cfb9bba18"} Jan 27 20:31:08 crc kubenswrapper[4858]: I0127 20:31:08.083347 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f71a6c8c7d8963f4dc296268e862159d12db5883167cea3b0706c27cfb9bba18" Jan 27 20:31:09 crc kubenswrapper[4858]: I0127 20:31:09.096419 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58d9bd62-8b5f-4144-8322-86542d1268c8","Type":"ContainerStarted","Data":"37d18146c7ce4f2e6a1b833e943da691a237cb48ff8baec93fc76a05eca950e9"} Jan 27 20:31:09 crc kubenswrapper[4858]: I0127 20:31:09.096657 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerName="ceilometer-central-agent" containerID="cri-o://3ce084b49bcc6871f0d17e28e6fdc0dbb1bf7d32612ae4456db2fddde4ba7e09" gracePeriod=30 Jan 27 20:31:09 crc kubenswrapper[4858]: I0127 20:31:09.096704 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerName="proxy-httpd" containerID="cri-o://37d18146c7ce4f2e6a1b833e943da691a237cb48ff8baec93fc76a05eca950e9" gracePeriod=30 Jan 27 20:31:09 crc kubenswrapper[4858]: I0127 20:31:09.096720 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerName="sg-core" containerID="cri-o://88fe1af6a03e466c2c7afa274201de688c8f42879eb01e170f133ed2e7514d38" gracePeriod=30 Jan 27 20:31:09 crc kubenswrapper[4858]: I0127 20:31:09.096732 4858 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerName="ceilometer-notification-agent" containerID="cri-o://b01c000e9fe9305d5ed03668b99294be5f7e83ba8af81acc45b81e80dd7d1921" gracePeriod=30 Jan 27 20:31:09 crc kubenswrapper[4858]: I0127 20:31:09.097088 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 20:31:09 crc kubenswrapper[4858]: I0127 20:31:09.128524 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.680768692 podStartE2EDuration="6.128500016s" podCreationTimestamp="2026-01-27 20:31:03 +0000 UTC" firstStartedPulling="2026-01-27 20:31:04.928750663 +0000 UTC m=+1409.636566369" lastFinishedPulling="2026-01-27 20:31:08.376481987 +0000 UTC m=+1413.084297693" observedRunningTime="2026-01-27 20:31:09.121825194 +0000 UTC m=+1413.829640910" watchObservedRunningTime="2026-01-27 20:31:09.128500016 +0000 UTC m=+1413.836315722" Jan 27 20:31:09 crc kubenswrapper[4858]: I0127 20:31:09.149624 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 20:31:09 crc kubenswrapper[4858]: I0127 20:31:09.149708 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 20:31:09 crc kubenswrapper[4858]: I0127 20:31:09.184428 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 20:31:09 crc kubenswrapper[4858]: I0127 20:31:09.206670 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 20:31:09 crc kubenswrapper[4858]: E0127 20:31:09.835302 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58d9bd62_8b5f_4144_8322_86542d1268c8.slice/crio-conmon-b01c000e9fe9305d5ed03668b99294be5f7e83ba8af81acc45b81e80dd7d1921.scope\": RecentStats: unable to find data in memory cache]" Jan 27 20:31:10 crc kubenswrapper[4858]: I0127 20:31:10.114969 4858 generic.go:334] "Generic (PLEG): container finished" podID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerID="37d18146c7ce4f2e6a1b833e943da691a237cb48ff8baec93fc76a05eca950e9" exitCode=0 Jan 27 20:31:10 crc kubenswrapper[4858]: I0127 20:31:10.115018 4858 generic.go:334] "Generic (PLEG): container finished" podID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerID="88fe1af6a03e466c2c7afa274201de688c8f42879eb01e170f133ed2e7514d38" exitCode=2 Jan 27 20:31:10 crc kubenswrapper[4858]: I0127 20:31:10.115028 4858 generic.go:334] "Generic (PLEG): container finished" podID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerID="b01c000e9fe9305d5ed03668b99294be5f7e83ba8af81acc45b81e80dd7d1921" exitCode=0 Jan 27 20:31:10 crc kubenswrapper[4858]: I0127 20:31:10.115055 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58d9bd62-8b5f-4144-8322-86542d1268c8","Type":"ContainerDied","Data":"37d18146c7ce4f2e6a1b833e943da691a237cb48ff8baec93fc76a05eca950e9"} Jan 27 20:31:10 crc kubenswrapper[4858]: I0127 20:31:10.115127 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58d9bd62-8b5f-4144-8322-86542d1268c8","Type":"ContainerDied","Data":"88fe1af6a03e466c2c7afa274201de688c8f42879eb01e170f133ed2e7514d38"} Jan 27 
20:31:10 crc kubenswrapper[4858]: I0127 20:31:10.115139 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58d9bd62-8b5f-4144-8322-86542d1268c8","Type":"ContainerDied","Data":"b01c000e9fe9305d5ed03668b99294be5f7e83ba8af81acc45b81e80dd7d1921"} Jan 27 20:31:10 crc kubenswrapper[4858]: I0127 20:31:10.115556 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 20:31:10 crc kubenswrapper[4858]: I0127 20:31:10.115600 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.212604 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.213191 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.221294 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.801787 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.964926 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qzc9m"] Jan 27 20:31:12 crc kubenswrapper[4858]: E0127 20:31:12.965365 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f669f0-2cb1-47cb-b220-758381725229" containerName="mariadb-database-create" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965382 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f669f0-2cb1-47cb-b220-758381725229" containerName="mariadb-database-create" Jan 27 20:31:12 crc kubenswrapper[4858]: E0127 20:31:12.965404 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerName="ceilometer-notification-agent" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965411 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerName="ceilometer-notification-agent" Jan 27 20:31:12 crc kubenswrapper[4858]: E0127 20:31:12.965422 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db9d22f-78a1-402c-abda-87f2f6fe1a3d" containerName="mariadb-account-create-update" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965428 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db9d22f-78a1-402c-abda-87f2f6fe1a3d" containerName="mariadb-account-create-update" Jan 27 20:31:12 crc kubenswrapper[4858]: E0127 20:31:12.965440 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerName="ceilometer-central-agent" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965446 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerName="ceilometer-central-agent" Jan 27 20:31:12 crc kubenswrapper[4858]: E0127 20:31:12.965462 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a" containerName="mariadb-database-create" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965468 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a" 
containerName="mariadb-database-create" Jan 27 20:31:12 crc kubenswrapper[4858]: E0127 20:31:12.965482 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63cacc1d-9f19-4bc5-aec0-93d97976666a" containerName="mariadb-database-create" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965489 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="63cacc1d-9f19-4bc5-aec0-93d97976666a" containerName="mariadb-database-create" Jan 27 20:31:12 crc kubenswrapper[4858]: E0127 20:31:12.965500 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90043c41-134e-47fc-9086-1c45a761a7c0" containerName="mariadb-account-create-update" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965506 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="90043c41-134e-47fc-9086-1c45a761a7c0" containerName="mariadb-account-create-update" Jan 27 20:31:12 crc kubenswrapper[4858]: E0127 20:31:12.965515 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerName="sg-core" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965521 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerName="sg-core" Jan 27 20:31:12 crc kubenswrapper[4858]: E0127 20:31:12.965527 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerName="proxy-httpd" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965534 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerName="proxy-httpd" Jan 27 20:31:12 crc kubenswrapper[4858]: E0127 20:31:12.965543 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358dc567-a5bf-4e80-843c-dabb5e3535e2" containerName="mariadb-account-create-update" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965569 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="358dc567-a5bf-4e80-843c-dabb5e3535e2" containerName="mariadb-account-create-update" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965753 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerName="sg-core" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965768 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db9d22f-78a1-402c-abda-87f2f6fe1a3d" containerName="mariadb-account-create-update" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965784 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerName="ceilometer-notification-agent" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965791 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f669f0-2cb1-47cb-b220-758381725229" containerName="mariadb-database-create" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965801 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="358dc567-a5bf-4e80-843c-dabb5e3535e2" containerName="mariadb-account-create-update" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965812 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a" containerName="mariadb-database-create" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965824 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="90043c41-134e-47fc-9086-1c45a761a7c0" 
containerName="mariadb-account-create-update" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965832 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerName="ceilometer-central-agent" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965843 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="63cacc1d-9f19-4bc5-aec0-93d97976666a" containerName="mariadb-database-create" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.965852 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerName="proxy-httpd" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.966641 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qzc9m" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.969039 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-sg-core-conf-yaml\") pod \"58d9bd62-8b5f-4144-8322-86542d1268c8\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.969154 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-config-data\") pod \"58d9bd62-8b5f-4144-8322-86542d1268c8\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.969191 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jk9v\" (UniqueName: \"kubernetes.io/projected/58d9bd62-8b5f-4144-8322-86542d1268c8-kube-api-access-6jk9v\") pod \"58d9bd62-8b5f-4144-8322-86542d1268c8\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.969281 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-combined-ca-bundle\") pod \"58d9bd62-8b5f-4144-8322-86542d1268c8\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.969495 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58d9bd62-8b5f-4144-8322-86542d1268c8-run-httpd\") pod \"58d9bd62-8b5f-4144-8322-86542d1268c8\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.969563 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-scripts\") pod \"58d9bd62-8b5f-4144-8322-86542d1268c8\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.969592 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58d9bd62-8b5f-4144-8322-86542d1268c8-log-httpd\") pod \"58d9bd62-8b5f-4144-8322-86542d1268c8\" (UID: \"58d9bd62-8b5f-4144-8322-86542d1268c8\") " Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.969875 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.970653 4858 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58d9bd62-8b5f-4144-8322-86542d1268c8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "58d9bd62-8b5f-4144-8322-86542d1268c8" (UID: "58d9bd62-8b5f-4144-8322-86542d1268c8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.970724 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58d9bd62-8b5f-4144-8322-86542d1268c8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "58d9bd62-8b5f-4144-8322-86542d1268c8" (UID: "58d9bd62-8b5f-4144-8322-86542d1268c8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.970943 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.972096 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7ctzz" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.986684 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-scripts" (OuterVolumeSpecName: "scripts") pod "58d9bd62-8b5f-4144-8322-86542d1268c8" (UID: "58d9bd62-8b5f-4144-8322-86542d1268c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.991002 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qzc9m"] Jan 27 20:31:12 crc kubenswrapper[4858]: I0127 20:31:12.991459 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58d9bd62-8b5f-4144-8322-86542d1268c8-kube-api-access-6jk9v" (OuterVolumeSpecName: "kube-api-access-6jk9v") pod "58d9bd62-8b5f-4144-8322-86542d1268c8" (UID: "58d9bd62-8b5f-4144-8322-86542d1268c8"). InnerVolumeSpecName "kube-api-access-6jk9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.054116 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "58d9bd62-8b5f-4144-8322-86542d1268c8" (UID: "58d9bd62-8b5f-4144-8322-86542d1268c8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.072909 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qzc9m\" (UID: \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\") " pod="openstack/nova-cell0-conductor-db-sync-qzc9m" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.073002 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-config-data\") pod \"nova-cell0-conductor-db-sync-qzc9m\" (UID: \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\") " pod="openstack/nova-cell0-conductor-db-sync-qzc9m" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.073226 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-scripts\") pod \"nova-cell0-conductor-db-sync-qzc9m\" (UID: \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\") " pod="openstack/nova-cell0-conductor-db-sync-qzc9m" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.073297 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-226rj\" (UniqueName: \"kubernetes.io/projected/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-kube-api-access-226rj\") pod \"nova-cell0-conductor-db-sync-qzc9m\" (UID: \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\") " pod="openstack/nova-cell0-conductor-db-sync-qzc9m" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.073629 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.073664 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jk9v\" (UniqueName: \"kubernetes.io/projected/58d9bd62-8b5f-4144-8322-86542d1268c8-kube-api-access-6jk9v\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.073681 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58d9bd62-8b5f-4144-8322-86542d1268c8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.073692 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.073703 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58d9bd62-8b5f-4144-8322-86542d1268c8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.125403 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58d9bd62-8b5f-4144-8322-86542d1268c8" (UID: "58d9bd62-8b5f-4144-8322-86542d1268c8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.145638 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-config-data" (OuterVolumeSpecName: "config-data") pod "58d9bd62-8b5f-4144-8322-86542d1268c8" (UID: "58d9bd62-8b5f-4144-8322-86542d1268c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.151519 4858 generic.go:334] "Generic (PLEG): container finished" podID="58d9bd62-8b5f-4144-8322-86542d1268c8" containerID="3ce084b49bcc6871f0d17e28e6fdc0dbb1bf7d32612ae4456db2fddde4ba7e09" exitCode=0 Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.151648 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.151652 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58d9bd62-8b5f-4144-8322-86542d1268c8","Type":"ContainerDied","Data":"3ce084b49bcc6871f0d17e28e6fdc0dbb1bf7d32612ae4456db2fddde4ba7e09"} Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.151706 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58d9bd62-8b5f-4144-8322-86542d1268c8","Type":"ContainerDied","Data":"3f8c4d1e83817b984fcc9c8554d5cd2d88783415151ba06d4035c81d6a2b0b19"} Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.151729 4858 scope.go:117] "RemoveContainer" containerID="37d18146c7ce4f2e6a1b833e943da691a237cb48ff8baec93fc76a05eca950e9" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.175467 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-scripts\") pod \"nova-cell0-conductor-db-sync-qzc9m\" (UID: \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\") " pod="openstack/nova-cell0-conductor-db-sync-qzc9m" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.175515 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-226rj\" (UniqueName: \"kubernetes.io/projected/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-kube-api-access-226rj\") pod \"nova-cell0-conductor-db-sync-qzc9m\" (UID: \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\") " pod="openstack/nova-cell0-conductor-db-sync-qzc9m" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.175613 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qzc9m\" (UID: \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\") " pod="openstack/nova-cell0-conductor-db-sync-qzc9m" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.175667 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-config-data\") pod \"nova-cell0-conductor-db-sync-qzc9m\" (UID: \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\") " pod="openstack/nova-cell0-conductor-db-sync-qzc9m" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.175731 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:13 
crc kubenswrapper[4858]: I0127 20:31:13.175746 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58d9bd62-8b5f-4144-8322-86542d1268c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.178007 4858 scope.go:117] "RemoveContainer" containerID="88fe1af6a03e466c2c7afa274201de688c8f42879eb01e170f133ed2e7514d38" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.182992 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-scripts\") pod \"nova-cell0-conductor-db-sync-qzc9m\" (UID: \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\") " pod="openstack/nova-cell0-conductor-db-sync-qzc9m" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.184872 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-config-data\") pod \"nova-cell0-conductor-db-sync-qzc9m\" (UID: \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\") " pod="openstack/nova-cell0-conductor-db-sync-qzc9m" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.191123 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qzc9m\" (UID: \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\") " pod="openstack/nova-cell0-conductor-db-sync-qzc9m" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.199409 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-226rj\" (UniqueName: \"kubernetes.io/projected/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-kube-api-access-226rj\") pod \"nova-cell0-conductor-db-sync-qzc9m\" (UID: \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\") " pod="openstack/nova-cell0-conductor-db-sync-qzc9m" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.204607 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.222725 4858 scope.go:117] "RemoveContainer" containerID="b01c000e9fe9305d5ed03668b99294be5f7e83ba8af81acc45b81e80dd7d1921" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.226647 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.264852 4858 scope.go:117] "RemoveContainer" containerID="3ce084b49bcc6871f0d17e28e6fdc0dbb1bf7d32612ae4456db2fddde4ba7e09" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.279358 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.282354 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.286616 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.291325 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.295287 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.382963 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-run-httpd\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.383028 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-scripts\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.383050 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.383107 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.383121 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-log-httpd\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.383144 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftbvx\" (UniqueName: \"kubernetes.io/projected/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-kube-api-access-ftbvx\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.383172 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-config-data\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.397840 4858 scope.go:117] "RemoveContainer" containerID="37d18146c7ce4f2e6a1b833e943da691a237cb48ff8baec93fc76a05eca950e9" Jan 27 20:31:13 crc kubenswrapper[4858]: E0127 20:31:13.401291 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"37d18146c7ce4f2e6a1b833e943da691a237cb48ff8baec93fc76a05eca950e9\": container with ID starting with 37d18146c7ce4f2e6a1b833e943da691a237cb48ff8baec93fc76a05eca950e9 not found: ID does not exist" containerID="37d18146c7ce4f2e6a1b833e943da691a237cb48ff8baec93fc76a05eca950e9" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.401350 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d18146c7ce4f2e6a1b833e943da691a237cb48ff8baec93fc76a05eca950e9"} err="failed to get container status \"37d18146c7ce4f2e6a1b833e943da691a237cb48ff8baec93fc76a05eca950e9\": rpc error: code = NotFound desc = could not find container \"37d18146c7ce4f2e6a1b833e943da691a237cb48ff8baec93fc76a05eca950e9\": container with ID starting with 37d18146c7ce4f2e6a1b833e943da691a237cb48ff8baec93fc76a05eca950e9 not found: ID does not exist" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.401382 4858 scope.go:117] "RemoveContainer" containerID="88fe1af6a03e466c2c7afa274201de688c8f42879eb01e170f133ed2e7514d38" Jan 27 20:31:13 crc kubenswrapper[4858]: E0127 20:31:13.405702 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88fe1af6a03e466c2c7afa274201de688c8f42879eb01e170f133ed2e7514d38\": container with ID starting with 88fe1af6a03e466c2c7afa274201de688c8f42879eb01e170f133ed2e7514d38 not found: ID does not exist" containerID="88fe1af6a03e466c2c7afa274201de688c8f42879eb01e170f133ed2e7514d38" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.405766 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88fe1af6a03e466c2c7afa274201de688c8f42879eb01e170f133ed2e7514d38"} err="failed to get container status \"88fe1af6a03e466c2c7afa274201de688c8f42879eb01e170f133ed2e7514d38\": rpc error: code = NotFound desc = could not find container \"88fe1af6a03e466c2c7afa274201de688c8f42879eb01e170f133ed2e7514d38\": container with ID starting with 88fe1af6a03e466c2c7afa274201de688c8f42879eb01e170f133ed2e7514d38 not found: ID does not exist" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.405818 4858 scope.go:117] "RemoveContainer" containerID="b01c000e9fe9305d5ed03668b99294be5f7e83ba8af81acc45b81e80dd7d1921" Jan 27 20:31:13 crc kubenswrapper[4858]: E0127 20:31:13.412702 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b01c000e9fe9305d5ed03668b99294be5f7e83ba8af81acc45b81e80dd7d1921\": container with ID starting with b01c000e9fe9305d5ed03668b99294be5f7e83ba8af81acc45b81e80dd7d1921 not found: ID does not exist" containerID="b01c000e9fe9305d5ed03668b99294be5f7e83ba8af81acc45b81e80dd7d1921" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.412753 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b01c000e9fe9305d5ed03668b99294be5f7e83ba8af81acc45b81e80dd7d1921"} err="failed to get container status \"b01c000e9fe9305d5ed03668b99294be5f7e83ba8af81acc45b81e80dd7d1921\": rpc error: code = NotFound desc = could not find container \"b01c000e9fe9305d5ed03668b99294be5f7e83ba8af81acc45b81e80dd7d1921\": container with ID starting with b01c000e9fe9305d5ed03668b99294be5f7e83ba8af81acc45b81e80dd7d1921 not found: ID does not exist" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.412786 4858 scope.go:117] "RemoveContainer" containerID="3ce084b49bcc6871f0d17e28e6fdc0dbb1bf7d32612ae4456db2fddde4ba7e09" Jan 27 20:31:13 crc 
kubenswrapper[4858]: E0127 20:31:13.417677 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ce084b49bcc6871f0d17e28e6fdc0dbb1bf7d32612ae4456db2fddde4ba7e09\": container with ID starting with 3ce084b49bcc6871f0d17e28e6fdc0dbb1bf7d32612ae4456db2fddde4ba7e09 not found: ID does not exist" containerID="3ce084b49bcc6871f0d17e28e6fdc0dbb1bf7d32612ae4456db2fddde4ba7e09" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.417724 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ce084b49bcc6871f0d17e28e6fdc0dbb1bf7d32612ae4456db2fddde4ba7e09"} err="failed to get container status \"3ce084b49bcc6871f0d17e28e6fdc0dbb1bf7d32612ae4456db2fddde4ba7e09\": rpc error: code = NotFound desc = could not find container \"3ce084b49bcc6871f0d17e28e6fdc0dbb1bf7d32612ae4456db2fddde4ba7e09\": container with ID starting with 3ce084b49bcc6871f0d17e28e6fdc0dbb1bf7d32612ae4456db2fddde4ba7e09 not found: ID does not exist" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.454254 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qzc9m" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.488162 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-run-httpd\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.488236 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-scripts\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.488261 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.488306 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.488326 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-log-httpd\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.488355 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftbvx\" (UniqueName: \"kubernetes.io/projected/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-kube-api-access-ftbvx\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.488393 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
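
The scope.go RemoveContainer calls above race with sandbox teardown: by the time the kubelet asks the runtime for the status of each old ceilometer-0 container, CRI-O has already discarded it, so every lookup returns rpc code NotFound and a "DeleteContainer returned error" entry is logged. These E-level lines are benign; the container is simply already gone. A sketch collecting the affected container IDs so they can be separated from real deletion failures (regex assumed for this format):

    import re

    # Captures the 64-hex container ID inside the NotFound errors above; the
    # \\? allows for the escaped quotes in the journal rendering.
    NOTFOUND = re.compile(r'code = NotFound desc = could not find container \\?"(?P<cid>[0-9a-f]{64})')

    def already_gone(lines):
        return {m["cid"] for line in lines if (m := NOTFOUND.search(line))}

    demo = [
        r'E0127 20:31:13.401291 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37d18146c7ce4f2e6a1b833e943da691a237cb48ff8baec93fc76a05eca950e9\": ID does not exist"',
    ]
    print(already_gone(demo))  # {'37d18146...50e9'}
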
\"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-config-data\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.490212 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-log-httpd\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.490307 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-run-httpd\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.504345 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.504425 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-scripts\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.505498 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-config-data\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.507444 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.530614 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftbvx\" (UniqueName: \"kubernetes.io/projected/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-kube-api-access-ftbvx\") pod \"ceilometer-0\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " pod="openstack/ceilometer-0" Jan 27 20:31:13 crc kubenswrapper[4858]: I0127 20:31:13.701930 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:31:14 crc kubenswrapper[4858]: I0127 20:31:14.106460 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58d9bd62-8b5f-4144-8322-86542d1268c8" path="/var/lib/kubelet/pods/58d9bd62-8b5f-4144-8322-86542d1268c8/volumes" Jan 27 20:31:14 crc kubenswrapper[4858]: I0127 20:31:14.182146 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qzc9m"] Jan 27 20:31:14 crc kubenswrapper[4858]: W0127 20:31:14.193174 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9d9977d_c6e5_4534_8e26_4da3b22c6cb8.slice/crio-341423238f3ff7f96d84b8d2109fec52cdb94e99007beeb24ad1bb1182b594c1 WatchSource:0}: Error finding container 341423238f3ff7f96d84b8d2109fec52cdb94e99007beeb24ad1bb1182b594c1: Status 404 returned error can't find the container with id 341423238f3ff7f96d84b8d2109fec52cdb94e99007beeb24ad1bb1182b594c1 Jan 27 20:31:14 crc kubenswrapper[4858]: I0127 20:31:14.292895 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:31:15 crc kubenswrapper[4858]: I0127 20:31:15.183213 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81","Type":"ContainerStarted","Data":"825590107f216f0ce20e0059faf9159d2fad656018a7393e4352518151143098"} Jan 27 20:31:15 crc kubenswrapper[4858]: I0127 20:31:15.183781 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81","Type":"ContainerStarted","Data":"a9b0de3ab899ca5b032cbc4534c544eea0076b54bd9022706a0973b7b64dd148"} Jan 27 20:31:15 crc kubenswrapper[4858]: I0127 20:31:15.183796 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81","Type":"ContainerStarted","Data":"9af7dc24e15f694667d3cac4e608f25820d936d296daffe9043bb7048f617802"} Jan 27 20:31:15 crc kubenswrapper[4858]: I0127 20:31:15.185562 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qzc9m" event={"ID":"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8","Type":"ContainerStarted","Data":"341423238f3ff7f96d84b8d2109fec52cdb94e99007beeb24ad1bb1182b594c1"} Jan 27 20:31:16 crc kubenswrapper[4858]: I0127 20:31:16.209693 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81","Type":"ContainerStarted","Data":"07c772a8a9764520c20272985a0433ffc0b804b00e61fbd4d98dbd12a6c946cf"} Jan 27 20:31:16 crc kubenswrapper[4858]: I0127 20:31:16.323082 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:31:16 crc kubenswrapper[4858]: I0127 20:31:16.472778 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 20:31:16 crc kubenswrapper[4858]: I0127 20:31:16.473392 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" containerName="watcher-decision-engine" containerID="cri-o://26dd29ca697d5eb74bc7a9a351007c651e1b5a9d5789601412dbabc73c2d32ba" gracePeriod=30 Jan 27 20:31:17 crc kubenswrapper[4858]: I0127 20:31:17.223566 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81","Type":"ContainerStarted","Data":"e8bd995979787bcfbeada3f4fa49cee0da853a9d2d4dc9860bd16a586657817c"} Jan 27 20:31:17 crc kubenswrapper[4858]: I0127 20:31:17.223994 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerName="ceilometer-central-agent" containerID="cri-o://a9b0de3ab899ca5b032cbc4534c544eea0076b54bd9022706a0973b7b64dd148" gracePeriod=30 Jan 27 20:31:17 crc kubenswrapper[4858]: I0127 20:31:17.224044 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 20:31:17 crc kubenswrapper[4858]: I0127 20:31:17.224143 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerName="proxy-httpd" containerID="cri-o://e8bd995979787bcfbeada3f4fa49cee0da853a9d2d4dc9860bd16a586657817c" gracePeriod=30 Jan 27 20:31:17 crc kubenswrapper[4858]: I0127 20:31:17.224205 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerName="sg-core" containerID="cri-o://07c772a8a9764520c20272985a0433ffc0b804b00e61fbd4d98dbd12a6c946cf" gracePeriod=30 Jan 27 20:31:17 crc kubenswrapper[4858]: I0127 20:31:17.224274 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerName="ceilometer-notification-agent" containerID="cri-o://825590107f216f0ce20e0059faf9159d2fad656018a7393e4352518151143098" gracePeriod=30 Jan 27 20:31:17 crc kubenswrapper[4858]: I0127 20:31:17.262469 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.638258963 podStartE2EDuration="4.262432178s" podCreationTimestamp="2026-01-27 20:31:13 +0000 UTC" firstStartedPulling="2026-01-27 20:31:14.29981873 +0000 UTC m=+1419.007634436" lastFinishedPulling="2026-01-27 20:31:16.923991945 +0000 UTC m=+1421.631807651" observedRunningTime="2026-01-27 20:31:17.253356057 +0000 UTC m=+1421.961171763" watchObservedRunningTime="2026-01-27 20:31:17.262432178 +0000 UTC m=+1421.970247884" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.036592 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.107973 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-custom-prometheus-ca\") pod \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.108305 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql2lf\" (UniqueName: \"kubernetes.io/projected/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-kube-api-access-ql2lf\") pod \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.108334 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-config-data\") pod \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.108426 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-logs\") pod \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.108505 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-combined-ca-bundle\") pod \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\" (UID: \"99d9c559-c61f-4bc2-907b-af9f9be0ce1b\") " Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.116352 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-logs" (OuterVolumeSpecName: "logs") pod "99d9c559-c61f-4bc2-907b-af9f9be0ce1b" (UID: "99d9c559-c61f-4bc2-907b-af9f9be0ce1b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.148996 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-kube-api-access-ql2lf" (OuterVolumeSpecName: "kube-api-access-ql2lf") pod "99d9c559-c61f-4bc2-907b-af9f9be0ce1b" (UID: "99d9c559-c61f-4bc2-907b-af9f9be0ce1b"). InnerVolumeSpecName "kube-api-access-ql2lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.153881 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "99d9c559-c61f-4bc2-907b-af9f9be0ce1b" (UID: "99d9c559-c61f-4bc2-907b-af9f9be0ce1b"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.159916 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99d9c559-c61f-4bc2-907b-af9f9be0ce1b" (UID: "99d9c559-c61f-4bc2-907b-af9f9be0ce1b"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.193801 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-config-data" (OuterVolumeSpecName: "config-data") pod "99d9c559-c61f-4bc2-907b-af9f9be0ce1b" (UID: "99d9c559-c61f-4bc2-907b-af9f9be0ce1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.214168 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql2lf\" (UniqueName: \"kubernetes.io/projected/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-kube-api-access-ql2lf\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.214211 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.214229 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.214239 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.214250 4858 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/99d9c559-c61f-4bc2-907b-af9f9be0ce1b-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.249535 4858 generic.go:334] "Generic (PLEG): container finished" podID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerID="07c772a8a9764520c20272985a0433ffc0b804b00e61fbd4d98dbd12a6c946cf" exitCode=2 Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.249592 4858 generic.go:334] "Generic (PLEG): container finished" podID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerID="825590107f216f0ce20e0059faf9159d2fad656018a7393e4352518151143098" exitCode=0 Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.253102 4858 generic.go:334] "Generic (PLEG): container finished" podID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" containerID="26dd29ca697d5eb74bc7a9a351007c651e1b5a9d5789601412dbabc73c2d32ba" exitCode=0 Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.253210 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.272025 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81","Type":"ContainerDied","Data":"07c772a8a9764520c20272985a0433ffc0b804b00e61fbd4d98dbd12a6c946cf"} Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.272074 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81","Type":"ContainerDied","Data":"825590107f216f0ce20e0059faf9159d2fad656018a7393e4352518151143098"} Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.272088 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"99d9c559-c61f-4bc2-907b-af9f9be0ce1b","Type":"ContainerDied","Data":"26dd29ca697d5eb74bc7a9a351007c651e1b5a9d5789601412dbabc73c2d32ba"} Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.272105 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"99d9c559-c61f-4bc2-907b-af9f9be0ce1b","Type":"ContainerDied","Data":"51a590607a0baf8c11d99fc07451727abefdfca06fdf6ae5d8a0c1436d9b24d3"} Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.272125 4858 scope.go:117] "RemoveContainer" containerID="26dd29ca697d5eb74bc7a9a351007c651e1b5a9d5789601412dbabc73c2d32ba" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.297250 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.315677 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.327730 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 20:31:18 crc kubenswrapper[4858]: E0127 20:31:18.328236 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" containerName="watcher-decision-engine" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.328250 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" containerName="watcher-decision-engine" Jan 27 20:31:18 crc kubenswrapper[4858]: E0127 20:31:18.328297 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" containerName="watcher-decision-engine" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.328305 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" containerName="watcher-decision-engine" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.328487 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" containerName="watcher-decision-engine" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.328502 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" containerName="watcher-decision-engine" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.328514 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" containerName="watcher-decision-engine" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.329246 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.331865 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.354099 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.357271 4858 scope.go:117] "RemoveContainer" containerID="7aa258bfad971de2ab7658e1139538288d0fd8f4d00d9ee09a7de38a6a9010cf" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.409058 4858 scope.go:117] "RemoveContainer" containerID="26dd29ca697d5eb74bc7a9a351007c651e1b5a9d5789601412dbabc73c2d32ba" Jan 27 20:31:18 crc kubenswrapper[4858]: E0127 20:31:18.409756 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26dd29ca697d5eb74bc7a9a351007c651e1b5a9d5789601412dbabc73c2d32ba\": container with ID starting with 26dd29ca697d5eb74bc7a9a351007c651e1b5a9d5789601412dbabc73c2d32ba not found: ID does not exist" containerID="26dd29ca697d5eb74bc7a9a351007c651e1b5a9d5789601412dbabc73c2d32ba" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.409799 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26dd29ca697d5eb74bc7a9a351007c651e1b5a9d5789601412dbabc73c2d32ba"} err="failed to get container status \"26dd29ca697d5eb74bc7a9a351007c651e1b5a9d5789601412dbabc73c2d32ba\": rpc error: code = NotFound desc = could not find container \"26dd29ca697d5eb74bc7a9a351007c651e1b5a9d5789601412dbabc73c2d32ba\": container with ID starting with 26dd29ca697d5eb74bc7a9a351007c651e1b5a9d5789601412dbabc73c2d32ba not found: ID does not exist" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.409821 4858 scope.go:117] "RemoveContainer" containerID="7aa258bfad971de2ab7658e1139538288d0fd8f4d00d9ee09a7de38a6a9010cf" Jan 27 20:31:18 crc kubenswrapper[4858]: E0127 20:31:18.410131 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aa258bfad971de2ab7658e1139538288d0fd8f4d00d9ee09a7de38a6a9010cf\": container with ID starting with 7aa258bfad971de2ab7658e1139538288d0fd8f4d00d9ee09a7de38a6a9010cf not found: ID does not exist" containerID="7aa258bfad971de2ab7658e1139538288d0fd8f4d00d9ee09a7de38a6a9010cf" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.410220 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aa258bfad971de2ab7658e1139538288d0fd8f4d00d9ee09a7de38a6a9010cf"} err="failed to get container status \"7aa258bfad971de2ab7658e1139538288d0fd8f4d00d9ee09a7de38a6a9010cf\": rpc error: code = NotFound desc = could not find container \"7aa258bfad971de2ab7658e1139538288d0fd8f4d00d9ee09a7de38a6a9010cf\": container with ID starting with 7aa258bfad971de2ab7658e1139538288d0fd8f4d00d9ee09a7de38a6a9010cf not found: ID does not exist" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.419596 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bcc0d9d-f611-4dd6-96ab-41df437ab21d-config-data\") pod \"watcher-decision-engine-0\" (UID: \"7bcc0d9d-f611-4dd6-96ab-41df437ab21d\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.419645 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44bpd\" (UniqueName: \"kubernetes.io/projected/7bcc0d9d-f611-4dd6-96ab-41df437ab21d-kube-api-access-44bpd\") pod \"watcher-decision-engine-0\" (UID: \"7bcc0d9d-f611-4dd6-96ab-41df437ab21d\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.419692 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bcc0d9d-f611-4dd6-96ab-41df437ab21d-logs\") pod \"watcher-decision-engine-0\" (UID: \"7bcc0d9d-f611-4dd6-96ab-41df437ab21d\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.419728 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bcc0d9d-f611-4dd6-96ab-41df437ab21d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"7bcc0d9d-f611-4dd6-96ab-41df437ab21d\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.420109 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7bcc0d9d-f611-4dd6-96ab-41df437ab21d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"7bcc0d9d-f611-4dd6-96ab-41df437ab21d\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.523222 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7bcc0d9d-f611-4dd6-96ab-41df437ab21d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"7bcc0d9d-f611-4dd6-96ab-41df437ab21d\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.523369 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bcc0d9d-f611-4dd6-96ab-41df437ab21d-config-data\") pod \"watcher-decision-engine-0\" (UID: \"7bcc0d9d-f611-4dd6-96ab-41df437ab21d\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.523392 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44bpd\" (UniqueName: \"kubernetes.io/projected/7bcc0d9d-f611-4dd6-96ab-41df437ab21d-kube-api-access-44bpd\") pod \"watcher-decision-engine-0\" (UID: \"7bcc0d9d-f611-4dd6-96ab-41df437ab21d\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.523434 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bcc0d9d-f611-4dd6-96ab-41df437ab21d-logs\") pod \"watcher-decision-engine-0\" (UID: \"7bcc0d9d-f611-4dd6-96ab-41df437ab21d\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.523466 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bcc0d9d-f611-4dd6-96ab-41df437ab21d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"7bcc0d9d-f611-4dd6-96ab-41df437ab21d\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.524270 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bcc0d9d-f611-4dd6-96ab-41df437ab21d-logs\") pod \"watcher-decision-engine-0\" (UID: \"7bcc0d9d-f611-4dd6-96ab-41df437ab21d\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.532940 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bcc0d9d-f611-4dd6-96ab-41df437ab21d-config-data\") pod \"watcher-decision-engine-0\" (UID: \"7bcc0d9d-f611-4dd6-96ab-41df437ab21d\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.536137 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7bcc0d9d-f611-4dd6-96ab-41df437ab21d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"7bcc0d9d-f611-4dd6-96ab-41df437ab21d\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.537407 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bcc0d9d-f611-4dd6-96ab-41df437ab21d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"7bcc0d9d-f611-4dd6-96ab-41df437ab21d\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.540101 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44bpd\" (UniqueName: \"kubernetes.io/projected/7bcc0d9d-f611-4dd6-96ab-41df437ab21d-kube-api-access-44bpd\") pod \"watcher-decision-engine-0\" (UID: \"7bcc0d9d-f611-4dd6-96ab-41df437ab21d\") " pod="openstack/watcher-decision-engine-0" Jan 27 20:31:18 crc kubenswrapper[4858]: I0127 20:31:18.654784 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 27 20:31:19 crc kubenswrapper[4858]: I0127 20:31:19.204298 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 27 20:31:20 crc kubenswrapper[4858]: I0127 20:31:20.090233 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" path="/var/lib/kubelet/pods/99d9c559-c61f-4bc2-907b-af9f9be0ce1b/volumes" Jan 27 20:31:25 crc kubenswrapper[4858]: I0127 20:31:25.331220 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7bcc0d9d-f611-4dd6-96ab-41df437ab21d","Type":"ContainerStarted","Data":"8e1c66de3ef3cee4ecec64761544e987693d5fdde77e0f9cc7b239e15e341a0a"} Jan 27 20:31:26 crc kubenswrapper[4858]: I0127 20:31:26.344318 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7bcc0d9d-f611-4dd6-96ab-41df437ab21d","Type":"ContainerStarted","Data":"43857a6ad87cc7297721ca6883b590230107136bd064b44c61bbf07082f8d956"} Jan 27 20:31:26 crc kubenswrapper[4858]: I0127 20:31:26.347001 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qzc9m" event={"ID":"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8","Type":"ContainerStarted","Data":"dcab2712707ca3ec52cd765d2937fd3cca53a446e8546db49beb61ebd167bc94"} Jan 27 20:31:26 crc kubenswrapper[4858]: I0127 20:31:26.369107 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=8.369086224 podStartE2EDuration="8.369086224s" podCreationTimestamp="2026-01-27 20:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:31:26.361869466 +0000 UTC m=+1431.069685192" watchObservedRunningTime="2026-01-27 20:31:26.369086224 +0000 UTC m=+1431.076901930" Jan 27 20:31:26 crc kubenswrapper[4858]: I0127 20:31:26.384074 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qzc9m" podStartSLOduration=3.010929802 podStartE2EDuration="14.384052625s" podCreationTimestamp="2026-01-27 20:31:12 +0000 UTC" firstStartedPulling="2026-01-27 20:31:14.195450075 +0000 UTC m=+1418.903265781" lastFinishedPulling="2026-01-27 20:31:25.568572898 +0000 UTC m=+1430.276388604" observedRunningTime="2026-01-27 20:31:26.380348778 +0000 UTC m=+1431.088164494" watchObservedRunningTime="2026-01-27 20:31:26.384052625 +0000 UTC m=+1431.091868331" Jan 27 20:31:28 crc kubenswrapper[4858]: I0127 20:31:28.655600 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 27 20:31:28 crc kubenswrapper[4858]: I0127 20:31:28.692305 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 27 20:31:29 crc kubenswrapper[4858]: I0127 20:31:29.381290 4858 generic.go:334] "Generic (PLEG): container finished" podID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerID="a9b0de3ab899ca5b032cbc4534c544eea0076b54bd9022706a0973b7b64dd148" exitCode=0 Jan 27 20:31:29 crc kubenswrapper[4858]: I0127 20:31:29.381498 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81","Type":"ContainerDied","Data":"a9b0de3ab899ca5b032cbc4534c544eea0076b54bd9022706a0973b7b64dd148"} Jan 27 20:31:29 crc 
Jan 27 20:31:28 crc kubenswrapper[4858]: I0127 20:31:28.655600 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 27 20:31:28 crc kubenswrapper[4858]: I0127 20:31:28.692305 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 27 20:31:29 crc kubenswrapper[4858]: I0127 20:31:29.381290 4858 generic.go:334] "Generic (PLEG): container finished" podID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerID="a9b0de3ab899ca5b032cbc4534c544eea0076b54bd9022706a0973b7b64dd148" exitCode=0 Jan 27 20:31:29 crc kubenswrapper[4858]: I0127 20:31:29.381498 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81","Type":"ContainerDied","Data":"a9b0de3ab899ca5b032cbc4534c544eea0076b54bd9022706a0973b7b64dd148"} Jan 27 20:31:29 crc kubenswrapper[4858]: I0127 20:31:29.382025 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 27 20:31:29 crc kubenswrapper[4858]: I0127 20:31:29.415883 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 27 20:31:40 crc kubenswrapper[4858]: I0127 20:31:40.508571 4858 generic.go:334] "Generic (PLEG): container finished" podID="f9d9977d-c6e5-4534-8e26-4da3b22c6cb8" containerID="dcab2712707ca3ec52cd765d2937fd3cca53a446e8546db49beb61ebd167bc94" exitCode=0 Jan 27 20:31:40 crc kubenswrapper[4858]: I0127 20:31:40.508677 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qzc9m" event={"ID":"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8","Type":"ContainerDied","Data":"dcab2712707ca3ec52cd765d2937fd3cca53a446e8546db49beb61ebd167bc94"} Jan 27 20:31:41 crc kubenswrapper[4858]: I0127 20:31:41.942161 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qzc9m" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.061785 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-combined-ca-bundle\") pod \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\" (UID: \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\") " Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.061887 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-scripts\") pod \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\" (UID: \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\") " Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.062004 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-226rj\" (UniqueName: \"kubernetes.io/projected/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-kube-api-access-226rj\") pod \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\" (UID: \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\") " Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.062150 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-config-data\") pod \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\" (UID: \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\") " Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.068912 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-scripts" (OuterVolumeSpecName: "scripts") pod "f9d9977d-c6e5-4534-8e26-4da3b22c6cb8" (UID: "f9d9977d-c6e5-4534-8e26-4da3b22c6cb8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.083959 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-kube-api-access-226rj" (OuterVolumeSpecName: "kube-api-access-226rj") pod "f9d9977d-c6e5-4534-8e26-4da3b22c6cb8" (UID: "f9d9977d-c6e5-4534-8e26-4da3b22c6cb8"). InnerVolumeSpecName "kube-api-access-226rj".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:31:42 crc kubenswrapper[4858]: E0127 20:31:42.092496 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-combined-ca-bundle podName:f9d9977d-c6e5-4534-8e26-4da3b22c6cb8 nodeName:}" failed. No retries permitted until 2026-01-27 20:31:42.592458612 +0000 UTC m=+1447.300274338 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-combined-ca-bundle") pod "f9d9977d-c6e5-4534-8e26-4da3b22c6cb8" (UID: "f9d9977d-c6e5-4534-8e26-4da3b22c6cb8") : error deleting /var/lib/kubelet/pods/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8/volume-subpaths: remove /var/lib/kubelet/pods/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8/volume-subpaths: no such file or directory Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.095021 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-config-data" (OuterVolumeSpecName: "config-data") pod "f9d9977d-c6e5-4534-8e26-4da3b22c6cb8" (UID: "f9d9977d-c6e5-4534-8e26-4da3b22c6cb8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.164915 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.164944 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-226rj\" (UniqueName: \"kubernetes.io/projected/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-kube-api-access-226rj\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.164955 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.532454 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qzc9m" event={"ID":"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8","Type":"ContainerDied","Data":"341423238f3ff7f96d84b8d2109fec52cdb94e99007beeb24ad1bb1182b594c1"} Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.532500 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="341423238f3ff7f96d84b8d2109fec52cdb94e99007beeb24ad1bb1182b594c1" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.532542 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qzc9m" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.643243 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 20:31:42 crc kubenswrapper[4858]: E0127 20:31:42.643767 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d9977d-c6e5-4534-8e26-4da3b22c6cb8" containerName="nova-cell0-conductor-db-sync" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.643979 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d9977d-c6e5-4534-8e26-4da3b22c6cb8" containerName="nova-cell0-conductor-db-sync" Jan 27 20:31:42 crc kubenswrapper[4858]: E0127 20:31:42.644014 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" containerName="watcher-decision-engine" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.644022 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" containerName="watcher-decision-engine" Jan 27 20:31:42 crc kubenswrapper[4858]: E0127 20:31:42.644046 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" containerName="watcher-decision-engine" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.644053 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" containerName="watcher-decision-engine" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.644281 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9d9977d-c6e5-4534-8e26-4da3b22c6cb8" containerName="nova-cell0-conductor-db-sync" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.644315 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="99d9c559-c61f-4bc2-907b-af9f9be0ce1b" containerName="watcher-decision-engine" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.644995 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.662583 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.675043 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-combined-ca-bundle\") pod \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\" (UID: \"f9d9977d-c6e5-4534-8e26-4da3b22c6cb8\") " Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.678920 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9d9977d-c6e5-4534-8e26-4da3b22c6cb8" (UID: "f9d9977d-c6e5-4534-8e26-4da3b22c6cb8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.777569 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.777927 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.778183 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcw6n\" (UniqueName: \"kubernetes.io/projected/ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0-kube-api-access-lcw6n\") pod \"nova-cell0-conductor-0\" (UID: \"ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.778447 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.880128 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.880235 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcw6n\" (UniqueName: \"kubernetes.io/projected/ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0-kube-api-access-lcw6n\") pod \"nova-cell0-conductor-0\" (UID: \"ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.880308 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.885163 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.890155 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.900842 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lcw6n\" (UniqueName: \"kubernetes.io/projected/ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0-kube-api-access-lcw6n\") pod \"nova-cell0-conductor-0\" (UID: \"ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0\") " pod="openstack/nova-cell0-conductor-0" Jan 27 20:31:42 crc kubenswrapper[4858]: I0127 20:31:42.964939 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 20:31:43 crc kubenswrapper[4858]: I0127 20:31:43.449981 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 20:31:43 crc kubenswrapper[4858]: W0127 20:31:43.453284 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded5a0e2c_bf3a_47c8_aecd_be2cd0b426b0.slice/crio-a4e458026a2bfe4120809eff3b149cee3f59abe12f501272775b141317b92f3f WatchSource:0}: Error finding container a4e458026a2bfe4120809eff3b149cee3f59abe12f501272775b141317b92f3f: Status 404 returned error can't find the container with id a4e458026a2bfe4120809eff3b149cee3f59abe12f501272775b141317b92f3f Jan 27 20:31:43 crc kubenswrapper[4858]: I0127 20:31:43.562573 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0","Type":"ContainerStarted","Data":"a4e458026a2bfe4120809eff3b149cee3f59abe12f501272775b141317b92f3f"} Jan 27 20:31:43 crc kubenswrapper[4858]: I0127 20:31:43.707987 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 20:31:44 crc kubenswrapper[4858]: I0127 20:31:44.580128 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0","Type":"ContainerStarted","Data":"8601cd84764b6fc1a5539355e3d809f14bc861e26489025870a9fb860f2d540b"} Jan 27 20:31:44 crc kubenswrapper[4858]: I0127 20:31:44.583245 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 20:31:44 crc kubenswrapper[4858]: I0127 20:31:44.611613 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.611592465 podStartE2EDuration="2.611592465s" podCreationTimestamp="2026-01-27 20:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:31:44.600452824 +0000 UTC m=+1449.308268580" watchObservedRunningTime="2026-01-27 20:31:44.611592465 +0000 UTC m=+1449.319408181" Jan 27 20:31:47 crc kubenswrapper[4858]: I0127 20:31:47.650580 4858 generic.go:334] "Generic (PLEG): container finished" podID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerID="e8bd995979787bcfbeada3f4fa49cee0da853a9d2d4dc9860bd16a586657817c" exitCode=137 Jan 27 20:31:47 crc kubenswrapper[4858]: I0127 20:31:47.652307 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81","Type":"ContainerDied","Data":"e8bd995979787bcfbeada3f4fa49cee0da853a9d2d4dc9860bd16a586657817c"} Jan 27 20:31:47 crc kubenswrapper[4858]: I0127 20:31:47.768186 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:31:47 crc kubenswrapper[4858]: I0127 20:31:47.906275 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-config-data\") pod \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " Jan 27 20:31:47 crc kubenswrapper[4858]: I0127 20:31:47.906636 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-scripts\") pod \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " Jan 27 20:31:47 crc kubenswrapper[4858]: I0127 20:31:47.906744 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-combined-ca-bundle\") pod \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " Jan 27 20:31:47 crc kubenswrapper[4858]: I0127 20:31:47.906866 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-sg-core-conf-yaml\") pod \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " Jan 27 20:31:47 crc kubenswrapper[4858]: I0127 20:31:47.906961 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftbvx\" (UniqueName: \"kubernetes.io/projected/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-kube-api-access-ftbvx\") pod \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " Jan 27 20:31:47 crc kubenswrapper[4858]: I0127 20:31:47.907121 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-log-httpd\") pod \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " Jan 27 20:31:47 crc kubenswrapper[4858]: I0127 20:31:47.907215 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-run-httpd\") pod \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\" (UID: \"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81\") " Jan 27 20:31:47 crc kubenswrapper[4858]: I0127 20:31:47.908831 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" (UID: "4b892e8a-ca5b-4bb5-837c-75b5b8b28c81"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:31:47 crc kubenswrapper[4858]: I0127 20:31:47.910528 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" (UID: "4b892e8a-ca5b-4bb5-837c-75b5b8b28c81"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:31:47 crc kubenswrapper[4858]: I0127 20:31:47.916607 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-scripts" (OuterVolumeSpecName: "scripts") pod "4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" (UID: "4b892e8a-ca5b-4bb5-837c-75b5b8b28c81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:47 crc kubenswrapper[4858]: I0127 20:31:47.916685 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-kube-api-access-ftbvx" (OuterVolumeSpecName: "kube-api-access-ftbvx") pod "4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" (UID: "4b892e8a-ca5b-4bb5-837c-75b5b8b28c81"). InnerVolumeSpecName "kube-api-access-ftbvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:31:47 crc kubenswrapper[4858]: I0127 20:31:47.982846 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" (UID: "4b892e8a-ca5b-4bb5-837c-75b5b8b28c81"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:47 crc kubenswrapper[4858]: I0127 20:31:47.997781 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" (UID: "4b892e8a-ca5b-4bb5-837c-75b5b8b28c81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.009949 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.009995 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.010010 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.010022 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftbvx\" (UniqueName: \"kubernetes.io/projected/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-kube-api-access-ftbvx\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.010034 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.010047 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.056504 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-config-data" (OuterVolumeSpecName: "config-data") pod "4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" (UID: "4b892e8a-ca5b-4bb5-837c-75b5b8b28c81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.112778 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.672289 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b892e8a-ca5b-4bb5-837c-75b5b8b28c81","Type":"ContainerDied","Data":"9af7dc24e15f694667d3cac4e608f25820d936d296daffe9043bb7048f617802"} Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.672388 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.672771 4858 scope.go:117] "RemoveContainer" containerID="e8bd995979787bcfbeada3f4fa49cee0da853a9d2d4dc9860bd16a586657817c" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.718680 4858 scope.go:117] "RemoveContainer" containerID="07c772a8a9764520c20272985a0433ffc0b804b00e61fbd4d98dbd12a6c946cf" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.720328 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.737380 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.751775 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:31:48 crc kubenswrapper[4858]: E0127 20:31:48.753387 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerName="ceilometer-notification-agent" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.753748 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerName="ceilometer-notification-agent" Jan 27 20:31:48 crc kubenswrapper[4858]: E0127 20:31:48.753789 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerName="proxy-httpd" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.753800 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerName="proxy-httpd" Jan 27 20:31:48 crc kubenswrapper[4858]: E0127 20:31:48.753930 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerName="ceilometer-central-agent" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.753949 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerName="ceilometer-central-agent" Jan 27 20:31:48 crc kubenswrapper[4858]: E0127 20:31:48.754055 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerName="sg-core" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.754070 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerName="sg-core" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.754399 4858 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerName="ceilometer-notification-agent" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.754428 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerName="ceilometer-central-agent" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.754445 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerName="proxy-httpd" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.754464 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" containerName="sg-core" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.756903 4858 scope.go:117] "RemoveContainer" containerID="825590107f216f0ce20e0059faf9159d2fad656018a7393e4352518151143098" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.757825 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.762422 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.762668 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.764171 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.797382 4858 scope.go:117] "RemoveContainer" containerID="a9b0de3ab899ca5b032cbc4534c544eea0076b54bd9022706a0973b7b64dd148" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.929204 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.929312 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a210a555-31ae-408f-800b-2441335f98e5-log-httpd\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.929352 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a210a555-31ae-408f-800b-2441335f98e5-run-httpd\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.929426 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-config-data\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.929478 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5r4s\" (UniqueName: \"kubernetes.io/projected/a210a555-31ae-408f-800b-2441335f98e5-kube-api-access-f5r4s\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " 
pod="openstack/ceilometer-0" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.929522 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-scripts\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:48 crc kubenswrapper[4858]: I0127 20:31:48.929573 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:49 crc kubenswrapper[4858]: I0127 20:31:49.031252 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-scripts\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:49 crc kubenswrapper[4858]: I0127 20:31:49.031340 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:49 crc kubenswrapper[4858]: I0127 20:31:49.031386 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:49 crc kubenswrapper[4858]: I0127 20:31:49.031428 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a210a555-31ae-408f-800b-2441335f98e5-log-httpd\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:49 crc kubenswrapper[4858]: I0127 20:31:49.031452 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a210a555-31ae-408f-800b-2441335f98e5-run-httpd\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:49 crc kubenswrapper[4858]: I0127 20:31:49.031496 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-config-data\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:49 crc kubenswrapper[4858]: I0127 20:31:49.031538 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5r4s\" (UniqueName: \"kubernetes.io/projected/a210a555-31ae-408f-800b-2441335f98e5-kube-api-access-f5r4s\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:49 crc kubenswrapper[4858]: I0127 20:31:49.032744 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a210a555-31ae-408f-800b-2441335f98e5-log-httpd\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " 
pod="openstack/ceilometer-0" Jan 27 20:31:49 crc kubenswrapper[4858]: I0127 20:31:49.033091 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a210a555-31ae-408f-800b-2441335f98e5-run-httpd\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:49 crc kubenswrapper[4858]: I0127 20:31:49.039311 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:49 crc kubenswrapper[4858]: I0127 20:31:49.040214 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-scripts\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:49 crc kubenswrapper[4858]: I0127 20:31:49.052273 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:49 crc kubenswrapper[4858]: I0127 20:31:49.052583 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-config-data\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:49 crc kubenswrapper[4858]: I0127 20:31:49.057098 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5r4s\" (UniqueName: \"kubernetes.io/projected/a210a555-31ae-408f-800b-2441335f98e5-kube-api-access-f5r4s\") pod \"ceilometer-0\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " pod="openstack/ceilometer-0" Jan 27 20:31:49 crc kubenswrapper[4858]: I0127 20:31:49.088391 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:31:49 crc kubenswrapper[4858]: I0127 20:31:49.605006 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:31:49 crc kubenswrapper[4858]: I0127 20:31:49.684642 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a210a555-31ae-408f-800b-2441335f98e5","Type":"ContainerStarted","Data":"c1eb67cfc9459112deb4e3c36b81c72df2209a67f0c4018bbd6783497e82ab86"} Jan 27 20:31:50 crc kubenswrapper[4858]: I0127 20:31:50.086322 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b892e8a-ca5b-4bb5-837c-75b5b8b28c81" path="/var/lib/kubelet/pods/4b892e8a-ca5b-4bb5-837c-75b5b8b28c81/volumes" Jan 27 20:31:50 crc kubenswrapper[4858]: I0127 20:31:50.702776 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a210a555-31ae-408f-800b-2441335f98e5","Type":"ContainerStarted","Data":"c664dc6529fa2e07ad56e2060df5f5430e3cea2c85f4ad85c69ed1896be124de"} Jan 27 20:31:50 crc kubenswrapper[4858]: I0127 20:31:50.703248 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a210a555-31ae-408f-800b-2441335f98e5","Type":"ContainerStarted","Data":"5a1ba7171b1c6075b4f77b4b1964573fe4e2878410ad4f9153877c185f62b9b4"} Jan 27 20:31:51 crc kubenswrapper[4858]: I0127 20:31:51.716111 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a210a555-31ae-408f-800b-2441335f98e5","Type":"ContainerStarted","Data":"58e95ed7b900ab96dc4b13b3a4e5c459fb1e2e927c329ac0b1ccda8a42a784c9"} Jan 27 20:31:52 crc kubenswrapper[4858]: I0127 20:31:52.734997 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a210a555-31ae-408f-800b-2441335f98e5","Type":"ContainerStarted","Data":"50691e4144897bf6d100ba78d5fc87cd0c885c127276937ef93e40c7217cf1ad"} Jan 27 20:31:52 crc kubenswrapper[4858]: I0127 20:31:52.736012 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 20:31:52 crc kubenswrapper[4858]: I0127 20:31:52.764206 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.988273738 podStartE2EDuration="4.764176222s" podCreationTimestamp="2026-01-27 20:31:48 +0000 UTC" firstStartedPulling="2026-01-27 20:31:49.616681181 +0000 UTC m=+1454.324496887" lastFinishedPulling="2026-01-27 20:31:52.392583665 +0000 UTC m=+1457.100399371" observedRunningTime="2026-01-27 20:31:52.756267985 +0000 UTC m=+1457.464083691" watchObservedRunningTime="2026-01-27 20:31:52.764176222 +0000 UTC m=+1457.471991928" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.023772 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.549226 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-qzhms"] Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.550965 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qzhms" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.554616 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.555118 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.588118 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qzhms"] Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.640727 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-config-data\") pod \"nova-cell0-cell-mapping-qzhms\" (UID: \"95620ef2-3348-440f-b7f6-ddebaccc5f17\") " pod="openstack/nova-cell0-cell-mapping-qzhms" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.641538 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-scripts\") pod \"nova-cell0-cell-mapping-qzhms\" (UID: \"95620ef2-3348-440f-b7f6-ddebaccc5f17\") " pod="openstack/nova-cell0-cell-mapping-qzhms" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.641873 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qzhms\" (UID: \"95620ef2-3348-440f-b7f6-ddebaccc5f17\") " pod="openstack/nova-cell0-cell-mapping-qzhms" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.642078 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvp2n\" (UniqueName: \"kubernetes.io/projected/95620ef2-3348-440f-b7f6-ddebaccc5f17-kube-api-access-mvp2n\") pod \"nova-cell0-cell-mapping-qzhms\" (UID: \"95620ef2-3348-440f-b7f6-ddebaccc5f17\") " pod="openstack/nova-cell0-cell-mapping-qzhms" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.744430 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-config-data\") pod \"nova-cell0-cell-mapping-qzhms\" (UID: \"95620ef2-3348-440f-b7f6-ddebaccc5f17\") " pod="openstack/nova-cell0-cell-mapping-qzhms" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.744539 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-scripts\") pod \"nova-cell0-cell-mapping-qzhms\" (UID: \"95620ef2-3348-440f-b7f6-ddebaccc5f17\") " pod="openstack/nova-cell0-cell-mapping-qzhms" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.744596 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qzhms\" (UID: \"95620ef2-3348-440f-b7f6-ddebaccc5f17\") " pod="openstack/nova-cell0-cell-mapping-qzhms" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.744647 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvp2n\" (UniqueName: 
\"kubernetes.io/projected/95620ef2-3348-440f-b7f6-ddebaccc5f17-kube-api-access-mvp2n\") pod \"nova-cell0-cell-mapping-qzhms\" (UID: \"95620ef2-3348-440f-b7f6-ddebaccc5f17\") " pod="openstack/nova-cell0-cell-mapping-qzhms" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.752024 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-scripts\") pod \"nova-cell0-cell-mapping-qzhms\" (UID: \"95620ef2-3348-440f-b7f6-ddebaccc5f17\") " pod="openstack/nova-cell0-cell-mapping-qzhms" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.768285 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-config-data\") pod \"nova-cell0-cell-mapping-qzhms\" (UID: \"95620ef2-3348-440f-b7f6-ddebaccc5f17\") " pod="openstack/nova-cell0-cell-mapping-qzhms" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.768338 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qzhms\" (UID: \"95620ef2-3348-440f-b7f6-ddebaccc5f17\") " pod="openstack/nova-cell0-cell-mapping-qzhms" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.769146 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvp2n\" (UniqueName: \"kubernetes.io/projected/95620ef2-3348-440f-b7f6-ddebaccc5f17-kube-api-access-mvp2n\") pod \"nova-cell0-cell-mapping-qzhms\" (UID: \"95620ef2-3348-440f-b7f6-ddebaccc5f17\") " pod="openstack/nova-cell0-cell-mapping-qzhms" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.774876 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.777436 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.784541 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.792521 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.801880 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.806048 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.826949 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.846417 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vfj8\" (UniqueName: \"kubernetes.io/projected/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-kube-api-access-8vfj8\") pod \"nova-cell1-novncproxy-0\" (UID: \"27b1a5ae-7542-4c09-be54-e7b08eb9fb04\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.846483 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"27b1a5ae-7542-4c09-be54-e7b08eb9fb04\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.846533 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"27b1a5ae-7542-4c09-be54-e7b08eb9fb04\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.863821 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.882935 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qzhms"
Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.950896 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d0ccd3-af56-498e-8f5a-90f1f1930434-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\") " pod="openstack/nova-api-0"
Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.951041 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vfj8\" (UniqueName: \"kubernetes.io/projected/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-kube-api-access-8vfj8\") pod \"nova-cell1-novncproxy-0\" (UID: \"27b1a5ae-7542-4c09-be54-e7b08eb9fb04\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.951082 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"27b1a5ae-7542-4c09-be54-e7b08eb9fb04\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.951128 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1d0ccd3-af56-498e-8f5a-90f1f1930434-logs\") pod \"nova-api-0\" (UID: \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\") " pod="openstack/nova-api-0"
Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.951166 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"27b1a5ae-7542-4c09-be54-e7b08eb9fb04\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.951213 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1d0ccd3-af56-498e-8f5a-90f1f1930434-config-data\") pod \"nova-api-0\" (UID: \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\") " pod="openstack/nova-api-0"
Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.951234 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njrxg\" (UniqueName: \"kubernetes.io/projected/a1d0ccd3-af56-498e-8f5a-90f1f1930434-kube-api-access-njrxg\") pod \"nova-api-0\" (UID: \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\") " pod="openstack/nova-api-0"
Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.958854 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.979324 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"27b1a5ae-7542-4c09-be54-e7b08eb9fb04\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.979519 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"27b1a5ae-7542-4c09-be54-e7b08eb9fb04\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 20:31:53 crc kubenswrapper[4858]: I0127 20:31:53.982487 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:53.998588 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.014382 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vfj8\" (UniqueName: \"kubernetes.io/projected/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-kube-api-access-8vfj8\") pod \"nova-cell1-novncproxy-0\" (UID: \"27b1a5ae-7542-4c09-be54-e7b08eb9fb04\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.113040 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e73d504c-b79b-4ff9-aed9-b165698f1972-logs\") pod \"nova-metadata-0\" (UID: \"e73d504c-b79b-4ff9-aed9-b165698f1972\") " pod="openstack/nova-metadata-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.113172 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1d0ccd3-af56-498e-8f5a-90f1f1930434-logs\") pod \"nova-api-0\" (UID: \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\") " pod="openstack/nova-api-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.113212 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73d504c-b79b-4ff9-aed9-b165698f1972-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e73d504c-b79b-4ff9-aed9-b165698f1972\") " pod="openstack/nova-metadata-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.113342 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1d0ccd3-af56-498e-8f5a-90f1f1930434-config-data\") pod \"nova-api-0\" (UID: \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\") " pod="openstack/nova-api-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.113366 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njrxg\" (UniqueName: \"kubernetes.io/projected/a1d0ccd3-af56-498e-8f5a-90f1f1930434-kube-api-access-njrxg\") pod \"nova-api-0\" (UID: \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\") " pod="openstack/nova-api-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.113424 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plg5k\" (UniqueName: \"kubernetes.io/projected/e73d504c-b79b-4ff9-aed9-b165698f1972-kube-api-access-plg5k\") pod \"nova-metadata-0\" (UID: \"e73d504c-b79b-4ff9-aed9-b165698f1972\") " pod="openstack/nova-metadata-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.113464 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d0ccd3-af56-498e-8f5a-90f1f1930434-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\") " pod="openstack/nova-api-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.113534 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e73d504c-b79b-4ff9-aed9-b165698f1972-config-data\") pod \"nova-metadata-0\" (UID: \"e73d504c-b79b-4ff9-aed9-b165698f1972\") " pod="openstack/nova-metadata-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.138497 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1d0ccd3-af56-498e-8f5a-90f1f1930434-logs\") pod \"nova-api-0\" (UID: \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\") " pod="openstack/nova-api-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.161643 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1d0ccd3-af56-498e-8f5a-90f1f1930434-config-data\") pod \"nova-api-0\" (UID: \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\") " pod="openstack/nova-api-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.172591 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d0ccd3-af56-498e-8f5a-90f1f1930434-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\") " pod="openstack/nova-api-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.173516 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.189377 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.239412 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.246568 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73d504c-b79b-4ff9-aed9-b165698f1972-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e73d504c-b79b-4ff9-aed9-b165698f1972\") " pod="openstack/nova-metadata-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.246706 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plg5k\" (UniqueName: \"kubernetes.io/projected/e73d504c-b79b-4ff9-aed9-b165698f1972-kube-api-access-plg5k\") pod \"nova-metadata-0\" (UID: \"e73d504c-b79b-4ff9-aed9-b165698f1972\") " pod="openstack/nova-metadata-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.246772 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e73d504c-b79b-4ff9-aed9-b165698f1972-config-data\") pod \"nova-metadata-0\" (UID: \"e73d504c-b79b-4ff9-aed9-b165698f1972\") " pod="openstack/nova-metadata-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.246843 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e73d504c-b79b-4ff9-aed9-b165698f1972-logs\") pod \"nova-metadata-0\" (UID: \"e73d504c-b79b-4ff9-aed9-b165698f1972\") " pod="openstack/nova-metadata-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.247083 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njrxg\" (UniqueName: \"kubernetes.io/projected/a1d0ccd3-af56-498e-8f5a-90f1f1930434-kube-api-access-njrxg\") pod \"nova-api-0\" (UID: \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\") " pod="openstack/nova-api-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.249652 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e73d504c-b79b-4ff9-aed9-b165698f1972-logs\") pod \"nova-metadata-0\" (UID: \"e73d504c-b79b-4ff9-aed9-b165698f1972\") " pod="openstack/nova-metadata-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.254751 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73d504c-b79b-4ff9-aed9-b165698f1972-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e73d504c-b79b-4ff9-aed9-b165698f1972\") " pod="openstack/nova-metadata-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.256191 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.259005 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e73d504c-b79b-4ff9-aed9-b165698f1972-config-data\") pod \"nova-metadata-0\" (UID: \"e73d504c-b79b-4ff9-aed9-b165698f1972\") " pod="openstack/nova-metadata-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.266837 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.287352 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.288009 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plg5k\" (UniqueName: \"kubernetes.io/projected/e73d504c-b79b-4ff9-aed9-b165698f1972-kube-api-access-plg5k\") pod \"nova-metadata-0\" (UID: \"e73d504c-b79b-4ff9-aed9-b165698f1972\") " pod="openstack/nova-metadata-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.325629 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59c97cfd99-mrcvv"]
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.327946 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.349321 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d3d726d5-b44e-4e5c-aca5-3d2014a5289b\") " pod="openstack/nova-scheduler-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.349485 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtzqn\" (UniqueName: \"kubernetes.io/projected/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-kube-api-access-dtzqn\") pod \"nova-scheduler-0\" (UID: \"d3d726d5-b44e-4e5c-aca5-3d2014a5289b\") " pod="openstack/nova-scheduler-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.349564 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-config-data\") pod \"nova-scheduler-0\" (UID: \"d3d726d5-b44e-4e5c-aca5-3d2014a5289b\") " pod="openstack/nova-scheduler-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.384054 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59c97cfd99-mrcvv"]
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.441796 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.465453 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d3d726d5-b44e-4e5c-aca5-3d2014a5289b\") " pod="openstack/nova-scheduler-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.465623 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-dns-swift-storage-0\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.465713 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-config\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.465825 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtzqn\" (UniqueName: \"kubernetes.io/projected/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-kube-api-access-dtzqn\") pod \"nova-scheduler-0\" (UID: \"d3d726d5-b44e-4e5c-aca5-3d2014a5289b\") " pod="openstack/nova-scheduler-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.465848 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-dns-svc\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.465926 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-ovsdbserver-sb\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.465991 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-config-data\") pod \"nova-scheduler-0\" (UID: \"d3d726d5-b44e-4e5c-aca5-3d2014a5289b\") " pod="openstack/nova-scheduler-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.466054 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwdj4\" (UniqueName: \"kubernetes.io/projected/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-kube-api-access-xwdj4\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.466124 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-ovsdbserver-nb\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.513410 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.515473 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d3d726d5-b44e-4e5c-aca5-3d2014a5289b\") " pod="openstack/nova-scheduler-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.523892 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtzqn\" (UniqueName: \"kubernetes.io/projected/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-kube-api-access-dtzqn\") pod \"nova-scheduler-0\" (UID: \"d3d726d5-b44e-4e5c-aca5-3d2014a5289b\") " pod="openstack/nova-scheduler-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.527025 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-config-data\") pod \"nova-scheduler-0\" (UID: \"d3d726d5-b44e-4e5c-aca5-3d2014a5289b\") " pod="openstack/nova-scheduler-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.582995 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwdj4\" (UniqueName: \"kubernetes.io/projected/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-kube-api-access-xwdj4\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.583101 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-ovsdbserver-nb\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.583326 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-dns-swift-storage-0\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.583402 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-config\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.583525 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-dns-svc\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.583630 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-ovsdbserver-sb\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.614996 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-ovsdbserver-sb\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.615994 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-dns-swift-storage-0\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.616688 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-config\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.623488 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-dns-svc\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.631375 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-ovsdbserver-nb\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.636717 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwdj4\" (UniqueName: \"kubernetes.io/projected/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-kube-api-access-xwdj4\") pod \"dnsmasq-dns-59c97cfd99-mrcvv\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.643807 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 20:31:54 crc kubenswrapper[4858]: I0127 20:31:54.700054 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.073187 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qzhms"]
Jan 27 20:31:55 crc kubenswrapper[4858]: W0127 20:31:55.084899 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95620ef2_3348_440f_b7f6_ddebaccc5f17.slice/crio-98405033df7d141e2be8121da94a2556638dd2af66e7d2c195aa2a291c446fe4 WatchSource:0}: Error finding container 98405033df7d141e2be8121da94a2556638dd2af66e7d2c195aa2a291c446fe4: Status 404 returned error can't find the container with id 98405033df7d141e2be8121da94a2556638dd2af66e7d2c195aa2a291c446fe4
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.110644 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.384856 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.516308 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 20:31:55 crc kubenswrapper[4858]: W0127 20:31:55.566059 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1d0ccd3_af56_498e_8f5a_90f1f1930434.slice/crio-bd0965942c642c769ad47b540a149938c8c52f52e3d294766b609818ebbeb8d1 WatchSource:0}: Error finding container bd0965942c642c769ad47b540a149938c8c52f52e3d294766b609818ebbeb8d1: Status 404 returned error can't find the container with id bd0965942c642c769ad47b540a149938c8c52f52e3d294766b609818ebbeb8d1
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.649624 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cw7wg"]
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.651137 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cw7wg"
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.666417 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.666903 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.688311 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cw7wg"]
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.714952 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.737897 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cw7wg\" (UID: \"e945c5a3-9e91-4cde-923f-764261351ad1\") " pod="openstack/nova-cell1-conductor-db-sync-cw7wg"
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.738116 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-scripts\") pod \"nova-cell1-conductor-db-sync-cw7wg\" (UID: \"e945c5a3-9e91-4cde-923f-764261351ad1\") " pod="openstack/nova-cell1-conductor-db-sync-cw7wg"
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.738310 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-config-data\") pod \"nova-cell1-conductor-db-sync-cw7wg\" (UID: \"e945c5a3-9e91-4cde-923f-764261351ad1\") " pod="openstack/nova-cell1-conductor-db-sync-cw7wg"
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.738348 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4xh6\" (UniqueName: \"kubernetes.io/projected/e945c5a3-9e91-4cde-923f-764261351ad1-kube-api-access-m4xh6\") pod \"nova-cell1-conductor-db-sync-cw7wg\" (UID: \"e945c5a3-9e91-4cde-923f-764261351ad1\") " pod="openstack/nova-cell1-conductor-db-sync-cw7wg"
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.746976 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59c97cfd99-mrcvv"]
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.840789 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cw7wg\" (UID: \"e945c5a3-9e91-4cde-923f-764261351ad1\") " pod="openstack/nova-cell1-conductor-db-sync-cw7wg"
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.841497 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-scripts\") pod \"nova-cell1-conductor-db-sync-cw7wg\" (UID: \"e945c5a3-9e91-4cde-923f-764261351ad1\") " pod="openstack/nova-cell1-conductor-db-sync-cw7wg"
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.841605 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-config-data\") pod \"nova-cell1-conductor-db-sync-cw7wg\" (UID: \"e945c5a3-9e91-4cde-923f-764261351ad1\") " pod="openstack/nova-cell1-conductor-db-sync-cw7wg"
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.841634 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4xh6\" (UniqueName: \"kubernetes.io/projected/e945c5a3-9e91-4cde-923f-764261351ad1-kube-api-access-m4xh6\") pod \"nova-cell1-conductor-db-sync-cw7wg\" (UID: \"e945c5a3-9e91-4cde-923f-764261351ad1\") " pod="openstack/nova-cell1-conductor-db-sync-cw7wg"
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.848397 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cw7wg\" (UID: \"e945c5a3-9e91-4cde-923f-764261351ad1\") " pod="openstack/nova-cell1-conductor-db-sync-cw7wg"
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.848535 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-config-data\") pod \"nova-cell1-conductor-db-sync-cw7wg\" (UID: \"e945c5a3-9e91-4cde-923f-764261351ad1\") " pod="openstack/nova-cell1-conductor-db-sync-cw7wg"
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.849297 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-scripts\") pod \"nova-cell1-conductor-db-sync-cw7wg\" (UID: \"e945c5a3-9e91-4cde-923f-764261351ad1\") " pod="openstack/nova-cell1-conductor-db-sync-cw7wg"
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.855188 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"27b1a5ae-7542-4c09-be54-e7b08eb9fb04","Type":"ContainerStarted","Data":"e077d9ba94123f497ea6d8eef6d33139bbe2c5348f74a89501be314acb99dd2f"}
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.863496 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4xh6\" (UniqueName: \"kubernetes.io/projected/e945c5a3-9e91-4cde-923f-764261351ad1-kube-api-access-m4xh6\") pod \"nova-cell1-conductor-db-sync-cw7wg\" (UID: \"e945c5a3-9e91-4cde-923f-764261351ad1\") " pod="openstack/nova-cell1-conductor-db-sync-cw7wg"
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.864001 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qzhms" event={"ID":"95620ef2-3348-440f-b7f6-ddebaccc5f17","Type":"ContainerStarted","Data":"42b5c9a3d3c1a209123b95abf557fbb400dded326074f62802f9850e97be50d1"}
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.864062 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qzhms" event={"ID":"95620ef2-3348-440f-b7f6-ddebaccc5f17","Type":"ContainerStarted","Data":"98405033df7d141e2be8121da94a2556638dd2af66e7d2c195aa2a291c446fe4"}
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.869040 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1d0ccd3-af56-498e-8f5a-90f1f1930434","Type":"ContainerStarted","Data":"bd0965942c642c769ad47b540a149938c8c52f52e3d294766b609818ebbeb8d1"}
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.871909 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d3d726d5-b44e-4e5c-aca5-3d2014a5289b","Type":"ContainerStarted","Data":"95d84dc1779b52705624c38b475ce4c2626e9c55bf47e9feacc77a3be6f7506c"}
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.875093 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv" event={"ID":"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06","Type":"ContainerStarted","Data":"d4f172c8684f496a471c80bc313be4d06f959176dadaf3bd3a1c2ff268a10b40"}
Jan 27 20:31:55 crc kubenswrapper[4858]: I0127 20:31:55.884785 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e73d504c-b79b-4ff9-aed9-b165698f1972","Type":"ContainerStarted","Data":"91da976653ab2f9e1571cb8ac0c08c5d27e2c158a70d68b3f9d1eb6b52ab7e41"}
Jan 27 20:31:56 crc kubenswrapper[4858]: I0127 20:31:55.999048 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cw7wg"
Jan 27 20:31:56 crc kubenswrapper[4858]: I0127 20:31:56.184475 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-qzhms" podStartSLOduration=3.184447206 podStartE2EDuration="3.184447206s" podCreationTimestamp="2026-01-27 20:31:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:31:55.892819261 +0000 UTC m=+1460.600634977" watchObservedRunningTime="2026-01-27 20:31:56.184447206 +0000 UTC m=+1460.892262912"
Jan 27 20:31:56 crc kubenswrapper[4858]: I0127 20:31:56.765130 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cw7wg"]
Jan 27 20:31:56 crc kubenswrapper[4858]: I0127 20:31:56.919111 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cw7wg" event={"ID":"e945c5a3-9e91-4cde-923f-764261351ad1","Type":"ContainerStarted","Data":"fd29c88e1de54b66d05f204523eb17a4a0808c847452e91ba4d5dc19363eb623"}
Jan 27 20:31:56 crc kubenswrapper[4858]: I0127 20:31:56.925726 4858 generic.go:334] "Generic (PLEG): container finished" podID="1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06" containerID="bb8ccfac3658f5615cffd177ef35ab48029e6d2fc7b6ae4405485fd867dfc958" exitCode=0
Jan 27 20:31:56 crc kubenswrapper[4858]: I0127 20:31:56.926063 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv" event={"ID":"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06","Type":"ContainerDied","Data":"bb8ccfac3658f5615cffd177ef35ab48029e6d2fc7b6ae4405485fd867dfc958"}
Jan 27 20:31:57 crc kubenswrapper[4858]: I0127 20:31:57.969737 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cw7wg" event={"ID":"e945c5a3-9e91-4cde-923f-764261351ad1","Type":"ContainerStarted","Data":"de4a1f38a9aa7dd445aa71a0de50952e01bd1913dab33b5530d157608d8b2746"}
Jan 27 20:31:58 crc kubenswrapper[4858]: I0127 20:31:58.011995 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv" event={"ID":"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06","Type":"ContainerStarted","Data":"aa80df46c0396e332d14f09514bf939e5883b938c3a07f2ff030eb229f93fa94"}
Jan 27 20:31:58 crc kubenswrapper[4858]: I0127 20:31:58.012454 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:31:58 crc kubenswrapper[4858]: I0127 20:31:58.015336 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-cw7wg" podStartSLOduration=3.015288033 podStartE2EDuration="3.015288033s" podCreationTimestamp="2026-01-27 20:31:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:31:57.992805646 +0000 UTC m=+1462.700621352" watchObservedRunningTime="2026-01-27 20:31:58.015288033 +0000 UTC m=+1462.723103739"
Jan 27 20:31:58 crc kubenswrapper[4858]: I0127 20:31:58.055782 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 20:31:58 crc kubenswrapper[4858]: I0127 20:31:58.063400 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv" podStartSLOduration=4.063368707 podStartE2EDuration="4.063368707s" podCreationTimestamp="2026-01-27 20:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:31:58.052610708 +0000 UTC m=+1462.760426424" watchObservedRunningTime="2026-01-27 20:31:58.063368707 +0000 UTC m=+1462.771184413"
Jan 27 20:31:58 crc kubenswrapper[4858]: I0127 20:31:58.108225 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 20:31:58 crc kubenswrapper[4858]: I0127 20:31:58.854498 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q77hx"]
Jan 27 20:31:58 crc kubenswrapper[4858]: I0127 20:31:58.857653 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q77hx"
Jan 27 20:31:58 crc kubenswrapper[4858]: I0127 20:31:58.904301 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e193df-42fe-456b-b277-bf843975384c-utilities\") pod \"redhat-operators-q77hx\" (UID: \"63e193df-42fe-456b-b277-bf843975384c\") " pod="openshift-marketplace/redhat-operators-q77hx"
Jan 27 20:31:58 crc kubenswrapper[4858]: I0127 20:31:58.904389 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e193df-42fe-456b-b277-bf843975384c-catalog-content\") pod \"redhat-operators-q77hx\" (UID: \"63e193df-42fe-456b-b277-bf843975384c\") " pod="openshift-marketplace/redhat-operators-q77hx"
Jan 27 20:31:58 crc kubenswrapper[4858]: I0127 20:31:58.906782 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrghf\" (UniqueName: \"kubernetes.io/projected/63e193df-42fe-456b-b277-bf843975384c-kube-api-access-xrghf\") pod \"redhat-operators-q77hx\" (UID: \"63e193df-42fe-456b-b277-bf843975384c\") " pod="openshift-marketplace/redhat-operators-q77hx"
Jan 27 20:31:58 crc kubenswrapper[4858]: I0127 20:31:58.941259 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q77hx"]
Jan 27 20:31:59 crc kubenswrapper[4858]: I0127 20:31:59.011829 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e193df-42fe-456b-b277-bf843975384c-utilities\") pod \"redhat-operators-q77hx\" (UID: \"63e193df-42fe-456b-b277-bf843975384c\") " pod="openshift-marketplace/redhat-operators-q77hx"
Jan 27 20:31:59 crc kubenswrapper[4858]: I0127 20:31:59.011902 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e193df-42fe-456b-b277-bf843975384c-catalog-content\") pod \"redhat-operators-q77hx\" (UID: \"63e193df-42fe-456b-b277-bf843975384c\") " pod="openshift-marketplace/redhat-operators-q77hx"
Jan 27 20:31:59 crc kubenswrapper[4858]: I0127 20:31:59.012447 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e193df-42fe-456b-b277-bf843975384c-utilities\") pod \"redhat-operators-q77hx\" (UID: \"63e193df-42fe-456b-b277-bf843975384c\") " pod="openshift-marketplace/redhat-operators-q77hx"
Jan 27 20:31:59 crc kubenswrapper[4858]: I0127 20:31:59.013970 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrghf\" (UniqueName: \"kubernetes.io/projected/63e193df-42fe-456b-b277-bf843975384c-kube-api-access-xrghf\") pod \"redhat-operators-q77hx\" (UID: \"63e193df-42fe-456b-b277-bf843975384c\") " pod="openshift-marketplace/redhat-operators-q77hx"
Jan 27 20:31:59 crc kubenswrapper[4858]: I0127 20:31:59.014178 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e193df-42fe-456b-b277-bf843975384c-catalog-content\") pod \"redhat-operators-q77hx\" (UID: \"63e193df-42fe-456b-b277-bf843975384c\") " pod="openshift-marketplace/redhat-operators-q77hx"
Jan 27 20:31:59 crc kubenswrapper[4858]: I0127 20:31:59.052490 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrghf\" (UniqueName: \"kubernetes.io/projected/63e193df-42fe-456b-b277-bf843975384c-kube-api-access-xrghf\") pod \"redhat-operators-q77hx\" (UID: \"63e193df-42fe-456b-b277-bf843975384c\") " pod="openshift-marketplace/redhat-operators-q77hx"
Jan 27 20:31:59 crc kubenswrapper[4858]: I0127 20:31:59.196646 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q77hx"
Jan 27 20:32:01 crc kubenswrapper[4858]: I0127 20:32:01.564438 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q77hx"]
Jan 27 20:32:02 crc kubenswrapper[4858]: I0127 20:32:02.067056 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e73d504c-b79b-4ff9-aed9-b165698f1972","Type":"ContainerStarted","Data":"fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae"}
Jan 27 20:32:02 crc kubenswrapper[4858]: I0127 20:32:02.067376 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e73d504c-b79b-4ff9-aed9-b165698f1972","Type":"ContainerStarted","Data":"d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60"}
Jan 27 20:32:02 crc kubenswrapper[4858]: I0127 20:32:02.067305 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e73d504c-b79b-4ff9-aed9-b165698f1972" containerName="nova-metadata-metadata" containerID="cri-o://fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae" gracePeriod=30
Jan 27 20:32:02 crc kubenswrapper[4858]: I0127 20:32:02.067225 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e73d504c-b79b-4ff9-aed9-b165698f1972" containerName="nova-metadata-log" containerID="cri-o://d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60" gracePeriod=30
Jan 27 20:32:02 crc kubenswrapper[4858]: I0127 20:32:02.079719 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="27b1a5ae-7542-4c09-be54-e7b08eb9fb04" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4ab6d7b5572f96cd5e26e7c6eaacb282ac27ed9889653fdd29d7d03e7a0c0d48" gracePeriod=30
Jan 27 20:32:02 crc kubenswrapper[4858]: I0127 20:32:02.089194 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"27b1a5ae-7542-4c09-be54-e7b08eb9fb04","Type":"ContainerStarted","Data":"4ab6d7b5572f96cd5e26e7c6eaacb282ac27ed9889653fdd29d7d03e7a0c0d48"}
Jan 27 20:32:02 crc kubenswrapper[4858]: I0127 20:32:02.089252 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1d0ccd3-af56-498e-8f5a-90f1f1930434","Type":"ContainerStarted","Data":"c65f30a3ea83e25ebcce3fc9db865befec928e1585a0f9b0d932d33a710254fd"}
Jan 27 20:32:02 crc kubenswrapper[4858]: I0127 20:32:02.089272 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1d0ccd3-af56-498e-8f5a-90f1f1930434","Type":"ContainerStarted","Data":"064f018074a9a27609186c6decb8d1ba1ec4d0306894ee46e1dad3caea37200d"}
Jan 27 20:32:02 crc kubenswrapper[4858]: I0127 20:32:02.091692 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d3d726d5-b44e-4e5c-aca5-3d2014a5289b","Type":"ContainerStarted","Data":"11582dee94551a00a0cd6447c62d1d168f2f59152507f1d5fc44d0cb50fc219e"}
Jan 27 20:32:02 crc kubenswrapper[4858]: I0127 20:32:02.097441 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.534477473 podStartE2EDuration="9.097409231s" podCreationTimestamp="2026-01-27 20:31:53 +0000 UTC" firstStartedPulling="2026-01-27 20:31:55.39886463 +0000 UTC m=+1460.106680336" lastFinishedPulling="2026-01-27 20:32:00.961796388 +0000 UTC m=+1465.669612094" observedRunningTime="2026-01-27 20:32:02.092227862 +0000 UTC m=+1466.800043568" watchObservedRunningTime="2026-01-27 20:32:02.097409231 +0000 UTC m=+1466.805224937"
Jan 27 20:32:02 crc kubenswrapper[4858]: I0127 20:32:02.101866 4858 generic.go:334] "Generic (PLEG): container finished" podID="63e193df-42fe-456b-b277-bf843975384c" containerID="f01ff22f79bd0e5d35b62c21944a3b566e73fa2a7af2555f8e8ee2f04077c1d4" exitCode=0
Jan 27 20:32:02 crc kubenswrapper[4858]: I0127 20:32:02.101913 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q77hx" event={"ID":"63e193df-42fe-456b-b277-bf843975384c","Type":"ContainerDied","Data":"f01ff22f79bd0e5d35b62c21944a3b566e73fa2a7af2555f8e8ee2f04077c1d4"}
Jan 27 20:32:02 crc kubenswrapper[4858]: I0127 20:32:02.101940 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q77hx" event={"ID":"63e193df-42fe-456b-b277-bf843975384c","Type":"ContainerStarted","Data":"e7c5e74fd28dc5976a9adb1835bc0485eca0b12d86f309ed7b54aeb28460d614"}
Jan 27 20:32:02 crc kubenswrapper[4858]: I0127 20:32:02.120954 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.88345 podStartE2EDuration="9.120928068s" podCreationTimestamp="2026-01-27 20:31:53 +0000 UTC" firstStartedPulling="2026-01-27 20:31:55.687529331 +0000 UTC m=+1460.395345037" lastFinishedPulling="2026-01-27 20:32:00.925007399 +0000 UTC m=+1465.632823105" observedRunningTime="2026-01-27 20:32:02.118065246 +0000 UTC m=+1466.825880952" watchObservedRunningTime="2026-01-27 20:32:02.120928068 +0000 UTC m=+1466.828743774"
Jan 27 20:32:02 crc kubenswrapper[4858]: I0127 20:32:02.141180 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.434595828 podStartE2EDuration="9.14115663s" podCreationTimestamp="2026-01-27 20:31:53 +0000 UTC" firstStartedPulling="2026-01-27 20:31:55.254249228 +0000 UTC m=+1459.962064934" lastFinishedPulling="2026-01-27 20:32:00.96081003 +0000 UTC m=+1465.668625736" observedRunningTime="2026-01-27 20:32:02.13383694 +0000 UTC m=+1466.841652656" watchObservedRunningTime="2026-01-27 20:32:02.14115663 +0000 UTC m=+1466.848972336"
Jan 27 20:32:02 crc kubenswrapper[4858]: I0127 20:32:02.157360 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.804298851 podStartE2EDuration="9.157339136s" podCreationTimestamp="2026-01-27 20:31:53 +0000 UTC" firstStartedPulling="2026-01-27 20:31:55.571895082 +0000 UTC m=+1460.279710788" lastFinishedPulling="2026-01-27 20:32:00.924935367 +0000 UTC m=+1465.632751073" observedRunningTime="2026-01-27 20:32:02.155179144 +0000 UTC m=+1466.862994860" watchObservedRunningTime="2026-01-27 20:32:02.157339136 +0000 UTC m=+1466.865154842"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.059631 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.131794 4858 generic.go:334] "Generic (PLEG): container finished" podID="e73d504c-b79b-4ff9-aed9-b165698f1972" containerID="fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae" exitCode=0
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.132358 4858 generic.go:334] "Generic (PLEG): container finished" podID="e73d504c-b79b-4ff9-aed9-b165698f1972" containerID="d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60" exitCode=143
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.132090 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e73d504c-b79b-4ff9-aed9-b165698f1972","Type":"ContainerDied","Data":"fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae"}
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.132707 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e73d504c-b79b-4ff9-aed9-b165698f1972","Type":"ContainerDied","Data":"d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60"}
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.132722 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e73d504c-b79b-4ff9-aed9-b165698f1972","Type":"ContainerDied","Data":"91da976653ab2f9e1571cb8ac0c08c5d27e2c158a70d68b3f9d1eb6b52ab7e41"}
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.132745 4858 scope.go:117] "RemoveContainer" containerID="fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.133703 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.162969 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plg5k\" (UniqueName: \"kubernetes.io/projected/e73d504c-b79b-4ff9-aed9-b165698f1972-kube-api-access-plg5k\") pod \"e73d504c-b79b-4ff9-aed9-b165698f1972\" (UID: \"e73d504c-b79b-4ff9-aed9-b165698f1972\") "
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.163168 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73d504c-b79b-4ff9-aed9-b165698f1972-combined-ca-bundle\") pod \"e73d504c-b79b-4ff9-aed9-b165698f1972\" (UID: \"e73d504c-b79b-4ff9-aed9-b165698f1972\") "
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.163415 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e73d504c-b79b-4ff9-aed9-b165698f1972-config-data\") pod \"e73d504c-b79b-4ff9-aed9-b165698f1972\" (UID: \"e73d504c-b79b-4ff9-aed9-b165698f1972\") "
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.163705 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e73d504c-b79b-4ff9-aed9-b165698f1972-logs\") pod \"e73d504c-b79b-4ff9-aed9-b165698f1972\" (UID: \"e73d504c-b79b-4ff9-aed9-b165698f1972\") "
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.165450 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e73d504c-b79b-4ff9-aed9-b165698f1972-logs" (OuterVolumeSpecName: "logs") pod "e73d504c-b79b-4ff9-aed9-b165698f1972" (UID: "e73d504c-b79b-4ff9-aed9-b165698f1972"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.174356 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e73d504c-b79b-4ff9-aed9-b165698f1972-kube-api-access-plg5k" (OuterVolumeSpecName: "kube-api-access-plg5k") pod "e73d504c-b79b-4ff9-aed9-b165698f1972" (UID: "e73d504c-b79b-4ff9-aed9-b165698f1972"). InnerVolumeSpecName "kube-api-access-plg5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.194713 4858 scope.go:117] "RemoveContainer" containerID="d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.223452 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e73d504c-b79b-4ff9-aed9-b165698f1972-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e73d504c-b79b-4ff9-aed9-b165698f1972" (UID: "e73d504c-b79b-4ff9-aed9-b165698f1972"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.235701 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e73d504c-b79b-4ff9-aed9-b165698f1972-config-data" (OuterVolumeSpecName: "config-data") pod "e73d504c-b79b-4ff9-aed9-b165698f1972" (UID: "e73d504c-b79b-4ff9-aed9-b165698f1972"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.267503 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e73d504c-b79b-4ff9-aed9-b165698f1972-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.267908 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e73d504c-b79b-4ff9-aed9-b165698f1972-logs\") on node \"crc\" DevicePath \"\""
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.267982 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plg5k\" (UniqueName: \"kubernetes.io/projected/e73d504c-b79b-4ff9-aed9-b165698f1972-kube-api-access-plg5k\") on node \"crc\" DevicePath \"\""
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.268084 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e73d504c-b79b-4ff9-aed9-b165698f1972-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.339226 4858 scope.go:117] "RemoveContainer" containerID="fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae"
Jan 27 20:32:03 crc kubenswrapper[4858]: E0127 20:32:03.339866 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae\": container with ID starting with fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae not found: ID does not exist" containerID="fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.339911 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae"} err="failed to get container status \"fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae\": rpc error: code = NotFound desc = could not find container \"fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae\": container with ID starting with fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae not found: ID does not exist"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.339940 4858 scope.go:117] "RemoveContainer" containerID="d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60"
Jan 27 20:32:03 crc kubenswrapper[4858]: E0127 20:32:03.340393 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60\": container with ID starting with d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60 not found: ID does not exist" containerID="d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.340481 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60"} err="failed to get container status \"d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60\": rpc error: code = NotFound desc = could not find container \"d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60\": container with ID starting with d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60 not found: ID does not exist"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.340542 4858 scope.go:117] "RemoveContainer" containerID="fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.341004 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae"} err="failed to get container status \"fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae\": rpc error: code = NotFound desc = could not find container \"fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae\": container with ID starting with fc2996ac02ccb77fb8ddb7c8ab48e90918a282724455bca1db8f1f010ebcc0ae not found: ID does not exist"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.341170 4858 scope.go:117] "RemoveContainer" containerID="d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.341603 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60"} err="failed to get container status \"d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60\": rpc error: code = NotFound desc = could not find container \"d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60\": container with ID starting with d42898503a2815fd5616e4e88a60ca65cf94f088f9aa4e364bdcdc1f0dcb8d60 not found: ID does not exist"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.469781 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.486322 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.507217 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 20:32:03 crc kubenswrapper[4858]: E0127 20:32:03.507910 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e73d504c-b79b-4ff9-aed9-b165698f1972" containerName="nova-metadata-log"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.507935 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e73d504c-b79b-4ff9-aed9-b165698f1972" containerName="nova-metadata-log"
Jan 27 20:32:03 crc kubenswrapper[4858]: E0127 20:32:03.507982 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e73d504c-b79b-4ff9-aed9-b165698f1972" containerName="nova-metadata-metadata"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.507990 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e73d504c-b79b-4ff9-aed9-b165698f1972" containerName="nova-metadata-metadata"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.508209 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e73d504c-b79b-4ff9-aed9-b165698f1972" containerName="nova-metadata-metadata"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.508232 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e73d504c-b79b-4ff9-aed9-b165698f1972" containerName="nova-metadata-log"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.509645 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.513466 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.513940 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.527811 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.616768 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/869b63f2-a08c-46a5-ba10-a717b8d7dae4-logs\") pod \"nova-metadata-0\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.616881 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.617039 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-config-data\") pod \"nova-metadata-0\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.617325 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2b49\" (UniqueName: \"kubernetes.io/projected/869b63f2-a08c-46a5-ba10-a717b8d7dae4-kube-api-access-t2b49\") pod \"nova-metadata-0\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.617398 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.720091 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/869b63f2-a08c-46a5-ba10-a717b8d7dae4-logs\") pod \"nova-metadata-0\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.720159 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.720229 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-config-data\") pod \"nova-metadata-0\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.720327 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2b49\" (UniqueName: \"kubernetes.io/projected/869b63f2-a08c-46a5-ba10-a717b8d7dae4-kube-api-access-t2b49\") pod \"nova-metadata-0\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.720360 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.722032 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/869b63f2-a08c-46a5-ba10-a717b8d7dae4-logs\") pod \"nova-metadata-0\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.725440 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.725792 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.737315 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-config-data\") pod \"nova-metadata-0\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.765133 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2b49\" (UniqueName: \"kubernetes.io/projected/869b63f2-a08c-46a5-ba10-a717b8d7dae4-kube-api-access-t2b49\") pod \"nova-metadata-0\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " pod="openstack/nova-metadata-0"
Jan 27 20:32:03 crc kubenswrapper[4858]: I0127 20:32:03.848177 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 27 20:32:04 crc kubenswrapper[4858]: I0127 20:32:04.095823 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e73d504c-b79b-4ff9-aed9-b165698f1972" path="/var/lib/kubelet/pods/e73d504c-b79b-4ff9-aed9-b165698f1972/volumes"
Jan 27 20:32:04 crc kubenswrapper[4858]: I0127 20:32:04.183176 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q77hx" event={"ID":"63e193df-42fe-456b-b277-bf843975384c","Type":"ContainerStarted","Data":"c6fd62ba5a8afd384c6c81bf8247f0bcef6cea6929d3a50c4365fa6f60a8af0d"}
Jan 27 20:32:04 crc kubenswrapper[4858]: I0127 20:32:04.240463 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 20:32:04 crc kubenswrapper[4858]: I0127 20:32:04.494098 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 20:32:04 crc kubenswrapper[4858]: I0127 20:32:04.514572 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 20:32:04 crc kubenswrapper[4858]: I0127 20:32:04.515143 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 20:32:04 crc kubenswrapper[4858]: I0127 20:32:04.645234 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 27 20:32:04 crc kubenswrapper[4858]: I0127 20:32:04.645350 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 27 20:32:04 crc kubenswrapper[4858]: I0127 20:32:04.683780 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 27 20:32:04 crc kubenswrapper[4858]: I0127 20:32:04.703845 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv"
Jan 27 20:32:04 crc kubenswrapper[4858]: I0127 20:32:04.790294 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9655b799f-5tbtb"]
Jan 27 20:32:04 crc kubenswrapper[4858]: I0127 20:32:04.791438 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9655b799f-5tbtb" podUID="cee80805-8e2b-44cd-8a95-8d4cf21effcd" containerName="dnsmasq-dns" containerID="cri-o://ffd0e60daa9864a08f51d8ad17007f61b10112610052154c9ca2e049fabf4c14" gracePeriod=10
Jan 27 20:32:05 crc kubenswrapper[4858]: I0127 20:32:05.194801 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"869b63f2-a08c-46a5-ba10-a717b8d7dae4","Type":"ContainerStarted","Data":"37cd611523aa48fc0082263b5ea476070082d16236d5788d9b5efc8a167edd2a"}
Jan 27 20:32:05 crc kubenswrapper[4858]: I0127 20:32:05.233637 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan
27 20:32:05 crc kubenswrapper[4858]: I0127 20:32:05.597845 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a1d0ccd3-af56-498e-8f5a-90f1f1930434" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 20:32:05 crc kubenswrapper[4858]: I0127 20:32:05.597897 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a1d0ccd3-af56-498e-8f5a-90f1f1930434" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.244882 4858 generic.go:334] "Generic (PLEG): container finished" podID="cee80805-8e2b-44cd-8a95-8d4cf21effcd" containerID="ffd0e60daa9864a08f51d8ad17007f61b10112610052154c9ca2e049fabf4c14" exitCode=0 Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.245372 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9655b799f-5tbtb" event={"ID":"cee80805-8e2b-44cd-8a95-8d4cf21effcd","Type":"ContainerDied","Data":"ffd0e60daa9864a08f51d8ad17007f61b10112610052154c9ca2e049fabf4c14"} Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.248416 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"869b63f2-a08c-46a5-ba10-a717b8d7dae4","Type":"ContainerStarted","Data":"afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2"} Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.248567 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"869b63f2-a08c-46a5-ba10-a717b8d7dae4","Type":"ContainerStarted","Data":"21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3"} Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.264425 4858 generic.go:334] "Generic (PLEG): container finished" podID="63e193df-42fe-456b-b277-bf843975384c" containerID="c6fd62ba5a8afd384c6c81bf8247f0bcef6cea6929d3a50c4365fa6f60a8af0d" exitCode=0 Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.264528 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q77hx" event={"ID":"63e193df-42fe-456b-b277-bf843975384c","Type":"ContainerDied","Data":"c6fd62ba5a8afd384c6c81bf8247f0bcef6cea6929d3a50c4365fa6f60a8af0d"} Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.281774 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.28174259 podStartE2EDuration="3.28174259s" podCreationTimestamp="2026-01-27 20:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:32:06.277662042 +0000 UTC m=+1470.985477738" watchObservedRunningTime="2026-01-27 20:32:06.28174259 +0000 UTC m=+1470.989558296" Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.378224 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.506507 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-config\") pod \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.506970 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-dns-swift-storage-0\") pod \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.507057 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9qnh\" (UniqueName: \"kubernetes.io/projected/cee80805-8e2b-44cd-8a95-8d4cf21effcd-kube-api-access-j9qnh\") pod \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.507137 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-dns-svc\") pod \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.507176 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-ovsdbserver-nb\") pod \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.507234 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-ovsdbserver-sb\") pod \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\" (UID: \"cee80805-8e2b-44cd-8a95-8d4cf21effcd\") " Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.574532 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cee80805-8e2b-44cd-8a95-8d4cf21effcd" (UID: "cee80805-8e2b-44cd-8a95-8d4cf21effcd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.578290 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-config" (OuterVolumeSpecName: "config") pod "cee80805-8e2b-44cd-8a95-8d4cf21effcd" (UID: "cee80805-8e2b-44cd-8a95-8d4cf21effcd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.579585 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cee80805-8e2b-44cd-8a95-8d4cf21effcd" (UID: "cee80805-8e2b-44cd-8a95-8d4cf21effcd"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.583210 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cee80805-8e2b-44cd-8a95-8d4cf21effcd" (UID: "cee80805-8e2b-44cd-8a95-8d4cf21effcd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.585874 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cee80805-8e2b-44cd-8a95-8d4cf21effcd" (UID: "cee80805-8e2b-44cd-8a95-8d4cf21effcd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.599177 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cee80805-8e2b-44cd-8a95-8d4cf21effcd-kube-api-access-j9qnh" (OuterVolumeSpecName: "kube-api-access-j9qnh") pod "cee80805-8e2b-44cd-8a95-8d4cf21effcd" (UID: "cee80805-8e2b-44cd-8a95-8d4cf21effcd"). InnerVolumeSpecName "kube-api-access-j9qnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.609844 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.609889 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9qnh\" (UniqueName: \"kubernetes.io/projected/cee80805-8e2b-44cd-8a95-8d4cf21effcd-kube-api-access-j9qnh\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.609907 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.609921 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.609936 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:06 crc kubenswrapper[4858]: I0127 20:32:06.609948 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee80805-8e2b-44cd-8a95-8d4cf21effcd-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:07 crc kubenswrapper[4858]: I0127 20:32:07.286271 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9655b799f-5tbtb" Jan 27 20:32:07 crc kubenswrapper[4858]: I0127 20:32:07.287969 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9655b799f-5tbtb" event={"ID":"cee80805-8e2b-44cd-8a95-8d4cf21effcd","Type":"ContainerDied","Data":"6416c62a7704bb999c112b791338bb988f62082ca518172cf2cc03bb31a24d31"} Jan 27 20:32:07 crc kubenswrapper[4858]: I0127 20:32:07.288049 4858 scope.go:117] "RemoveContainer" containerID="ffd0e60daa9864a08f51d8ad17007f61b10112610052154c9ca2e049fabf4c14" Jan 27 20:32:07 crc kubenswrapper[4858]: I0127 20:32:07.317315 4858 scope.go:117] "RemoveContainer" containerID="50d1f444878d784a29c1a232a50356fe61e400e278103e65afa4da821c8e9835" Jan 27 20:32:07 crc kubenswrapper[4858]: I0127 20:32:07.335616 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9655b799f-5tbtb"] Jan 27 20:32:07 crc kubenswrapper[4858]: I0127 20:32:07.354321 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9655b799f-5tbtb"] Jan 27 20:32:08 crc kubenswrapper[4858]: I0127 20:32:08.084280 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cee80805-8e2b-44cd-8a95-8d4cf21effcd" path="/var/lib/kubelet/pods/cee80805-8e2b-44cd-8a95-8d4cf21effcd/volumes" Jan 27 20:32:08 crc kubenswrapper[4858]: I0127 20:32:08.301796 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q77hx" event={"ID":"63e193df-42fe-456b-b277-bf843975384c","Type":"ContainerStarted","Data":"2cdd9aa7263ff834d57ee32bcebb944412aff4974a0c572c868ab01b82716001"} Jan 27 20:32:08 crc kubenswrapper[4858]: I0127 20:32:08.341805 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q77hx" podStartSLOduration=5.184506617 podStartE2EDuration="10.341775565s" podCreationTimestamp="2026-01-27 20:31:58 +0000 UTC" firstStartedPulling="2026-01-27 20:32:02.10432448 +0000 UTC m=+1466.812140196" lastFinishedPulling="2026-01-27 20:32:07.261593438 +0000 UTC m=+1471.969409144" observedRunningTime="2026-01-27 20:32:08.329229714 +0000 UTC m=+1473.037045420" watchObservedRunningTime="2026-01-27 20:32:08.341775565 +0000 UTC m=+1473.049591271" Jan 27 20:32:08 crc kubenswrapper[4858]: I0127 20:32:08.849120 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 20:32:08 crc kubenswrapper[4858]: I0127 20:32:08.849182 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 20:32:09 crc kubenswrapper[4858]: I0127 20:32:09.198086 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q77hx" Jan 27 20:32:09 crc kubenswrapper[4858]: I0127 20:32:09.198157 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q77hx" Jan 27 20:32:09 crc kubenswrapper[4858]: I0127 20:32:09.315427 4858 generic.go:334] "Generic (PLEG): container finished" podID="95620ef2-3348-440f-b7f6-ddebaccc5f17" containerID="42b5c9a3d3c1a209123b95abf557fbb400dded326074f62802f9850e97be50d1" exitCode=0 Jan 27 20:32:09 crc kubenswrapper[4858]: I0127 20:32:09.315544 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qzhms" event={"ID":"95620ef2-3348-440f-b7f6-ddebaccc5f17","Type":"ContainerDied","Data":"42b5c9a3d3c1a209123b95abf557fbb400dded326074f62802f9850e97be50d1"} Jan 27 20:32:10 
crc kubenswrapper[4858]: I0127 20:32:10.244889 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q77hx" podUID="63e193df-42fe-456b-b277-bf843975384c" containerName="registry-server" probeResult="failure" output=< Jan 27 20:32:10 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 20:32:10 crc kubenswrapper[4858]: > Jan 27 20:32:10 crc kubenswrapper[4858]: I0127 20:32:10.748060 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qzhms" Jan 27 20:32:10 crc kubenswrapper[4858]: I0127 20:32:10.909191 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvp2n\" (UniqueName: \"kubernetes.io/projected/95620ef2-3348-440f-b7f6-ddebaccc5f17-kube-api-access-mvp2n\") pod \"95620ef2-3348-440f-b7f6-ddebaccc5f17\" (UID: \"95620ef2-3348-440f-b7f6-ddebaccc5f17\") " Jan 27 20:32:10 crc kubenswrapper[4858]: I0127 20:32:10.909749 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-config-data\") pod \"95620ef2-3348-440f-b7f6-ddebaccc5f17\" (UID: \"95620ef2-3348-440f-b7f6-ddebaccc5f17\") " Jan 27 20:32:10 crc kubenswrapper[4858]: I0127 20:32:10.909798 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-combined-ca-bundle\") pod \"95620ef2-3348-440f-b7f6-ddebaccc5f17\" (UID: \"95620ef2-3348-440f-b7f6-ddebaccc5f17\") " Jan 27 20:32:10 crc kubenswrapper[4858]: I0127 20:32:10.909889 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-scripts\") pod \"95620ef2-3348-440f-b7f6-ddebaccc5f17\" (UID: \"95620ef2-3348-440f-b7f6-ddebaccc5f17\") " Jan 27 20:32:10 crc kubenswrapper[4858]: I0127 20:32:10.916799 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-scripts" (OuterVolumeSpecName: "scripts") pod "95620ef2-3348-440f-b7f6-ddebaccc5f17" (UID: "95620ef2-3348-440f-b7f6-ddebaccc5f17"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:10 crc kubenswrapper[4858]: I0127 20:32:10.916970 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95620ef2-3348-440f-b7f6-ddebaccc5f17-kube-api-access-mvp2n" (OuterVolumeSpecName: "kube-api-access-mvp2n") pod "95620ef2-3348-440f-b7f6-ddebaccc5f17" (UID: "95620ef2-3348-440f-b7f6-ddebaccc5f17"). InnerVolumeSpecName "kube-api-access-mvp2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:32:10 crc kubenswrapper[4858]: I0127 20:32:10.942865 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95620ef2-3348-440f-b7f6-ddebaccc5f17" (UID: "95620ef2-3348-440f-b7f6-ddebaccc5f17"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:10 crc kubenswrapper[4858]: I0127 20:32:10.947695 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-config-data" (OuterVolumeSpecName: "config-data") pod "95620ef2-3348-440f-b7f6-ddebaccc5f17" (UID: "95620ef2-3348-440f-b7f6-ddebaccc5f17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:11 crc kubenswrapper[4858]: I0127 20:32:11.014057 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvp2n\" (UniqueName: \"kubernetes.io/projected/95620ef2-3348-440f-b7f6-ddebaccc5f17-kube-api-access-mvp2n\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:11 crc kubenswrapper[4858]: I0127 20:32:11.014106 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:11 crc kubenswrapper[4858]: I0127 20:32:11.014124 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:11 crc kubenswrapper[4858]: I0127 20:32:11.014139 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95620ef2-3348-440f-b7f6-ddebaccc5f17-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:11 crc kubenswrapper[4858]: I0127 20:32:11.269309 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-9655b799f-5tbtb" podUID="cee80805-8e2b-44cd-8a95-8d4cf21effcd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.183:5353: i/o timeout" Jan 27 20:32:11 crc kubenswrapper[4858]: I0127 20:32:11.340133 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qzhms" event={"ID":"95620ef2-3348-440f-b7f6-ddebaccc5f17","Type":"ContainerDied","Data":"98405033df7d141e2be8121da94a2556638dd2af66e7d2c195aa2a291c446fe4"} Jan 27 20:32:11 crc kubenswrapper[4858]: I0127 20:32:11.340177 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98405033df7d141e2be8121da94a2556638dd2af66e7d2c195aa2a291c446fe4" Jan 27 20:32:11 crc kubenswrapper[4858]: I0127 20:32:11.340201 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qzhms" Jan 27 20:32:11 crc kubenswrapper[4858]: I0127 20:32:11.527633 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:32:11 crc kubenswrapper[4858]: I0127 20:32:11.528681 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a1d0ccd3-af56-498e-8f5a-90f1f1930434" containerName="nova-api-log" containerID="cri-o://064f018074a9a27609186c6decb8d1ba1ec4d0306894ee46e1dad3caea37200d" gracePeriod=30 Jan 27 20:32:11 crc kubenswrapper[4858]: I0127 20:32:11.529308 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a1d0ccd3-af56-498e-8f5a-90f1f1930434" containerName="nova-api-api" containerID="cri-o://c65f30a3ea83e25ebcce3fc9db865befec928e1585a0f9b0d932d33a710254fd" gracePeriod=30 Jan 27 20:32:11 crc kubenswrapper[4858]: I0127 20:32:11.548351 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:32:11 crc kubenswrapper[4858]: I0127 20:32:11.548655 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d3d726d5-b44e-4e5c-aca5-3d2014a5289b" containerName="nova-scheduler-scheduler" containerID="cri-o://11582dee94551a00a0cd6447c62d1d168f2f59152507f1d5fc44d0cb50fc219e" gracePeriod=30 Jan 27 20:32:11 crc kubenswrapper[4858]: I0127 20:32:11.573672 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:32:11 crc kubenswrapper[4858]: I0127 20:32:11.573977 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="869b63f2-a08c-46a5-ba10-a717b8d7dae4" containerName="nova-metadata-log" containerID="cri-o://21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3" gracePeriod=30 Jan 27 20:32:11 crc kubenswrapper[4858]: I0127 20:32:11.574642 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="869b63f2-a08c-46a5-ba10-a717b8d7dae4" containerName="nova-metadata-metadata" containerID="cri-o://afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2" gracePeriod=30 Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.357415 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.361753 4858 generic.go:334] "Generic (PLEG): container finished" podID="a1d0ccd3-af56-498e-8f5a-90f1f1930434" containerID="064f018074a9a27609186c6decb8d1ba1ec4d0306894ee46e1dad3caea37200d" exitCode=143 Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.361834 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1d0ccd3-af56-498e-8f5a-90f1f1930434","Type":"ContainerDied","Data":"064f018074a9a27609186c6decb8d1ba1ec4d0306894ee46e1dad3caea37200d"} Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.387591 4858 generic.go:334] "Generic (PLEG): container finished" podID="869b63f2-a08c-46a5-ba10-a717b8d7dae4" containerID="afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2" exitCode=0 Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.387651 4858 generic.go:334] "Generic (PLEG): container finished" podID="869b63f2-a08c-46a5-ba10-a717b8d7dae4" containerID="21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3" exitCode=143 Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.387681 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"869b63f2-a08c-46a5-ba10-a717b8d7dae4","Type":"ContainerDied","Data":"afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2"} Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.387738 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"869b63f2-a08c-46a5-ba10-a717b8d7dae4","Type":"ContainerDied","Data":"21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3"} Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.387752 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"869b63f2-a08c-46a5-ba10-a717b8d7dae4","Type":"ContainerDied","Data":"37cd611523aa48fc0082263b5ea476070082d16236d5788d9b5efc8a167edd2a"} Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.387772 4858 scope.go:117] "RemoveContainer" containerID="afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.387953 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.468738 4858 scope.go:117] "RemoveContainer" containerID="21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.486626 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-config-data\") pod \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.486725 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2b49\" (UniqueName: \"kubernetes.io/projected/869b63f2-a08c-46a5-ba10-a717b8d7dae4-kube-api-access-t2b49\") pod \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.486761 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-combined-ca-bundle\") pod \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.490309 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-nova-metadata-tls-certs\") pod \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.490592 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/869b63f2-a08c-46a5-ba10-a717b8d7dae4-logs\") pod \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\" (UID: \"869b63f2-a08c-46a5-ba10-a717b8d7dae4\") " Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.494528 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/869b63f2-a08c-46a5-ba10-a717b8d7dae4-logs" (OuterVolumeSpecName: "logs") pod "869b63f2-a08c-46a5-ba10-a717b8d7dae4" (UID: "869b63f2-a08c-46a5-ba10-a717b8d7dae4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.500078 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/869b63f2-a08c-46a5-ba10-a717b8d7dae4-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.501126 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869b63f2-a08c-46a5-ba10-a717b8d7dae4-kube-api-access-t2b49" (OuterVolumeSpecName: "kube-api-access-t2b49") pod "869b63f2-a08c-46a5-ba10-a717b8d7dae4" (UID: "869b63f2-a08c-46a5-ba10-a717b8d7dae4"). InnerVolumeSpecName "kube-api-access-t2b49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.535839 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-config-data" (OuterVolumeSpecName: "config-data") pod "869b63f2-a08c-46a5-ba10-a717b8d7dae4" (UID: "869b63f2-a08c-46a5-ba10-a717b8d7dae4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.575609 4858 scope.go:117] "RemoveContainer" containerID="afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2" Jan 27 20:32:12 crc kubenswrapper[4858]: E0127 20:32:12.580651 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2\": container with ID starting with afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2 not found: ID does not exist" containerID="afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.580715 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2"} err="failed to get container status \"afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2\": rpc error: code = NotFound desc = could not find container \"afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2\": container with ID starting with afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2 not found: ID does not exist" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.580750 4858 scope.go:117] "RemoveContainer" containerID="21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3" Jan 27 20:32:12 crc kubenswrapper[4858]: E0127 20:32:12.581258 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3\": container with ID starting with 21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3 not found: ID does not exist" containerID="21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.581374 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3"} err="failed to get container status \"21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3\": rpc error: code = NotFound desc = could not find container \"21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3\": container with ID starting with 21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3 not found: ID does not exist" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.581464 4858 scope.go:117] "RemoveContainer" containerID="afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.581881 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2"} err="failed to get container status \"afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2\": rpc error: code = NotFound desc = could not find container \"afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2\": container with ID starting with afb227f8329d81830e11a53c0cea6e2cbc76eb0ead9cb689cb8b560392ced3f2 not found: ID does not exist" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.581970 4858 scope.go:117] "RemoveContainer" containerID="21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.582607 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3"} err="failed to get container status \"21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3\": rpc error: code = NotFound desc = could not find container \"21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3\": container with ID starting with 21793aeee664ee6aaa5ae7a09a1e04db519894e4b91010855aba5ceed02a7be3 not found: ID does not exist" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.596761 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "869b63f2-a08c-46a5-ba10-a717b8d7dae4" (UID: "869b63f2-a08c-46a5-ba10-a717b8d7dae4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.602264 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.602753 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2b49\" (UniqueName: \"kubernetes.io/projected/869b63f2-a08c-46a5-ba10-a717b8d7dae4-kube-api-access-t2b49\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.602885 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.602719 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "869b63f2-a08c-46a5-ba10-a717b8d7dae4" (UID: "869b63f2-a08c-46a5-ba10-a717b8d7dae4"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.707541 4858 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/869b63f2-a08c-46a5-ba10-a717b8d7dae4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.796891 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.808142 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.826378 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:32:12 crc kubenswrapper[4858]: E0127 20:32:12.826951 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869b63f2-a08c-46a5-ba10-a717b8d7dae4" containerName="nova-metadata-metadata" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.826974 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="869b63f2-a08c-46a5-ba10-a717b8d7dae4" containerName="nova-metadata-metadata" Jan 27 20:32:12 crc kubenswrapper[4858]: E0127 20:32:12.827004 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee80805-8e2b-44cd-8a95-8d4cf21effcd" containerName="init" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.827013 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee80805-8e2b-44cd-8a95-8d4cf21effcd" containerName="init" Jan 27 20:32:12 crc kubenswrapper[4858]: E0127 20:32:12.827025 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869b63f2-a08c-46a5-ba10-a717b8d7dae4" containerName="nova-metadata-log" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.827034 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="869b63f2-a08c-46a5-ba10-a717b8d7dae4" containerName="nova-metadata-log" Jan 27 20:32:12 crc kubenswrapper[4858]: E0127 20:32:12.827048 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95620ef2-3348-440f-b7f6-ddebaccc5f17" containerName="nova-manage" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.827055 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="95620ef2-3348-440f-b7f6-ddebaccc5f17" containerName="nova-manage" Jan 27 20:32:12 crc kubenswrapper[4858]: E0127 20:32:12.827077 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee80805-8e2b-44cd-8a95-8d4cf21effcd" containerName="dnsmasq-dns" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.827084 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee80805-8e2b-44cd-8a95-8d4cf21effcd" containerName="dnsmasq-dns" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.827334 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="95620ef2-3348-440f-b7f6-ddebaccc5f17" containerName="nova-manage" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.827360 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cee80805-8e2b-44cd-8a95-8d4cf21effcd" containerName="dnsmasq-dns" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.827375 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="869b63f2-a08c-46a5-ba10-a717b8d7dae4" containerName="nova-metadata-metadata" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.827394 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="869b63f2-a08c-46a5-ba10-a717b8d7dae4" 
containerName="nova-metadata-log" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.831962 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.836194 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.836503 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.842253 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.913123 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " pod="openstack/nova-metadata-0" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.913182 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1e9ddf6-0f93-4a84-a740-26192235747a-logs\") pod \"nova-metadata-0\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " pod="openstack/nova-metadata-0" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.913291 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-config-data\") pod \"nova-metadata-0\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " pod="openstack/nova-metadata-0" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.913314 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5qpf\" (UniqueName: \"kubernetes.io/projected/e1e9ddf6-0f93-4a84-a740-26192235747a-kube-api-access-w5qpf\") pod \"nova-metadata-0\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " pod="openstack/nova-metadata-0" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.913420 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " pod="openstack/nova-metadata-0" Jan 27 20:32:12 crc kubenswrapper[4858]: I0127 20:32:12.991101 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.015351 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-config-data\") pod \"nova-metadata-0\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " pod="openstack/nova-metadata-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.015408 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5qpf\" (UniqueName: \"kubernetes.io/projected/e1e9ddf6-0f93-4a84-a740-26192235747a-kube-api-access-w5qpf\") pod \"nova-metadata-0\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " pod="openstack/nova-metadata-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.015568 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " pod="openstack/nova-metadata-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.015687 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " pod="openstack/nova-metadata-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.016271 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1e9ddf6-0f93-4a84-a740-26192235747a-logs\") pod \"nova-metadata-0\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " pod="openstack/nova-metadata-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.016400 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1e9ddf6-0f93-4a84-a740-26192235747a-logs\") pod \"nova-metadata-0\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " pod="openstack/nova-metadata-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.031780 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " pod="openstack/nova-metadata-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.031877 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " pod="openstack/nova-metadata-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.038056 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5qpf\" (UniqueName: \"kubernetes.io/projected/e1e9ddf6-0f93-4a84-a740-26192235747a-kube-api-access-w5qpf\") pod \"nova-metadata-0\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " pod="openstack/nova-metadata-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.039207 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-config-data\") 
pod \"nova-metadata-0\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " pod="openstack/nova-metadata-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.117352 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-config-data\") pod \"d3d726d5-b44e-4e5c-aca5-3d2014a5289b\" (UID: \"d3d726d5-b44e-4e5c-aca5-3d2014a5289b\") " Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.117848 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-combined-ca-bundle\") pod \"d3d726d5-b44e-4e5c-aca5-3d2014a5289b\" (UID: \"d3d726d5-b44e-4e5c-aca5-3d2014a5289b\") " Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.117970 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtzqn\" (UniqueName: \"kubernetes.io/projected/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-kube-api-access-dtzqn\") pod \"d3d726d5-b44e-4e5c-aca5-3d2014a5289b\" (UID: \"d3d726d5-b44e-4e5c-aca5-3d2014a5289b\") " Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.123260 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-kube-api-access-dtzqn" (OuterVolumeSpecName: "kube-api-access-dtzqn") pod "d3d726d5-b44e-4e5c-aca5-3d2014a5289b" (UID: "d3d726d5-b44e-4e5c-aca5-3d2014a5289b"). InnerVolumeSpecName "kube-api-access-dtzqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.156321 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.158169 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3d726d5-b44e-4e5c-aca5-3d2014a5289b" (UID: "d3d726d5-b44e-4e5c-aca5-3d2014a5289b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.186111 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-config-data" (OuterVolumeSpecName: "config-data") pod "d3d726d5-b44e-4e5c-aca5-3d2014a5289b" (UID: "d3d726d5-b44e-4e5c-aca5-3d2014a5289b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.220722 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.220759 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.220775 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtzqn\" (UniqueName: \"kubernetes.io/projected/d3d726d5-b44e-4e5c-aca5-3d2014a5289b-kube-api-access-dtzqn\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.399695 4858 generic.go:334] "Generic (PLEG): container finished" podID="e945c5a3-9e91-4cde-923f-764261351ad1" containerID="de4a1f38a9aa7dd445aa71a0de50952e01bd1913dab33b5530d157608d8b2746" exitCode=0 Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.399783 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cw7wg" event={"ID":"e945c5a3-9e91-4cde-923f-764261351ad1","Type":"ContainerDied","Data":"de4a1f38a9aa7dd445aa71a0de50952e01bd1913dab33b5530d157608d8b2746"} Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.402726 4858 generic.go:334] "Generic (PLEG): container finished" podID="a1d0ccd3-af56-498e-8f5a-90f1f1930434" containerID="c65f30a3ea83e25ebcce3fc9db865befec928e1585a0f9b0d932d33a710254fd" exitCode=0 Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.402781 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1d0ccd3-af56-498e-8f5a-90f1f1930434","Type":"ContainerDied","Data":"c65f30a3ea83e25ebcce3fc9db865befec928e1585a0f9b0d932d33a710254fd"} Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.406290 4858 generic.go:334] "Generic (PLEG): container finished" podID="d3d726d5-b44e-4e5c-aca5-3d2014a5289b" containerID="11582dee94551a00a0cd6447c62d1d168f2f59152507f1d5fc44d0cb50fc219e" exitCode=0 Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.406379 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.406372 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d3d726d5-b44e-4e5c-aca5-3d2014a5289b","Type":"ContainerDied","Data":"11582dee94551a00a0cd6447c62d1d168f2f59152507f1d5fc44d0cb50fc219e"} Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.406458 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d3d726d5-b44e-4e5c-aca5-3d2014a5289b","Type":"ContainerDied","Data":"95d84dc1779b52705624c38b475ce4c2626e9c55bf47e9feacc77a3be6f7506c"} Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.406480 4858 scope.go:117] "RemoveContainer" containerID="11582dee94551a00a0cd6447c62d1d168f2f59152507f1d5fc44d0cb50fc219e" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.470628 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.485501 4858 scope.go:117] "RemoveContainer" containerID="11582dee94551a00a0cd6447c62d1d168f2f59152507f1d5fc44d0cb50fc219e" Jan 27 20:32:13 crc kubenswrapper[4858]: E0127 20:32:13.488694 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11582dee94551a00a0cd6447c62d1d168f2f59152507f1d5fc44d0cb50fc219e\": container with ID starting with 11582dee94551a00a0cd6447c62d1d168f2f59152507f1d5fc44d0cb50fc219e not found: ID does not exist" containerID="11582dee94551a00a0cd6447c62d1d168f2f59152507f1d5fc44d0cb50fc219e" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.488754 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11582dee94551a00a0cd6447c62d1d168f2f59152507f1d5fc44d0cb50fc219e"} err="failed to get container status \"11582dee94551a00a0cd6447c62d1d168f2f59152507f1d5fc44d0cb50fc219e\": rpc error: code = NotFound desc = could not find container \"11582dee94551a00a0cd6447c62d1d168f2f59152507f1d5fc44d0cb50fc219e\": container with ID starting with 11582dee94551a00a0cd6447c62d1d168f2f59152507f1d5fc44d0cb50fc219e not found: ID does not exist" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.527464 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.542774 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:32:13 crc kubenswrapper[4858]: E0127 20:32:13.543482 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3d726d5-b44e-4e5c-aca5-3d2014a5289b" containerName="nova-scheduler-scheduler" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.543502 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d726d5-b44e-4e5c-aca5-3d2014a5289b" containerName="nova-scheduler-scheduler" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.543812 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3d726d5-b44e-4e5c-aca5-3d2014a5289b" containerName="nova-scheduler-scheduler" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.544788 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.551160 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.555659 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.630804 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-config-data\") pod \"nova-scheduler-0\" (UID: \"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.631351 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.631823 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wql4h\" (UniqueName: \"kubernetes.io/projected/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-kube-api-access-wql4h\") pod \"nova-scheduler-0\" (UID: \"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:13 crc kubenswrapper[4858]: W0127 20:32:13.670510 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1e9ddf6_0f93_4a84_a740_26192235747a.slice/crio-1d6ff1344a5603c1531fc2e6a66ff27337f606fb0cf735d85106f705d3085987 WatchSource:0}: Error finding container 1d6ff1344a5603c1531fc2e6a66ff27337f606fb0cf735d85106f705d3085987: Status 404 returned error can't find the container with id 1d6ff1344a5603c1531fc2e6a66ff27337f606fb0cf735d85106f705d3085987 Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.676053 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.734637 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-config-data\") pod \"nova-scheduler-0\" (UID: \"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.734828 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.734987 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wql4h\" (UniqueName: \"kubernetes.io/projected/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-kube-api-access-wql4h\") pod \"nova-scheduler-0\" (UID: \"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.740600 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-config-data\") pod \"nova-scheduler-0\" (UID: \"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.740860 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.757183 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wql4h\" (UniqueName: \"kubernetes.io/projected/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-kube-api-access-wql4h\") pod \"nova-scheduler-0\" (UID: \"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.875334 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:32:13 crc kubenswrapper[4858]: I0127 20:32:13.896696 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.040574 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1d0ccd3-af56-498e-8f5a-90f1f1930434-logs\") pod \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\" (UID: \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\") " Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.041070 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1d0ccd3-af56-498e-8f5a-90f1f1930434-config-data\") pod \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\" (UID: \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\") " Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.041133 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d0ccd3-af56-498e-8f5a-90f1f1930434-combined-ca-bundle\") pod \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\" (UID: \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\") " Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.041168 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njrxg\" (UniqueName: \"kubernetes.io/projected/a1d0ccd3-af56-498e-8f5a-90f1f1930434-kube-api-access-njrxg\") pod \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\" (UID: \"a1d0ccd3-af56-498e-8f5a-90f1f1930434\") " Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.041739 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1d0ccd3-af56-498e-8f5a-90f1f1930434-logs" (OuterVolumeSpecName: "logs") pod "a1d0ccd3-af56-498e-8f5a-90f1f1930434" (UID: "a1d0ccd3-af56-498e-8f5a-90f1f1930434"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.046871 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1d0ccd3-af56-498e-8f5a-90f1f1930434-kube-api-access-njrxg" (OuterVolumeSpecName: "kube-api-access-njrxg") pod "a1d0ccd3-af56-498e-8f5a-90f1f1930434" (UID: "a1d0ccd3-af56-498e-8f5a-90f1f1930434"). InnerVolumeSpecName "kube-api-access-njrxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.084361 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1d0ccd3-af56-498e-8f5a-90f1f1930434-config-data" (OuterVolumeSpecName: "config-data") pod "a1d0ccd3-af56-498e-8f5a-90f1f1930434" (UID: "a1d0ccd3-af56-498e-8f5a-90f1f1930434"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.133948 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1d0ccd3-af56-498e-8f5a-90f1f1930434-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1d0ccd3-af56-498e-8f5a-90f1f1930434" (UID: "a1d0ccd3-af56-498e-8f5a-90f1f1930434"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.143621 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1d0ccd3-af56-498e-8f5a-90f1f1930434-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.143661 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1d0ccd3-af56-498e-8f5a-90f1f1930434-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.143676 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njrxg\" (UniqueName: \"kubernetes.io/projected/a1d0ccd3-af56-498e-8f5a-90f1f1930434-kube-api-access-njrxg\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.143689 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1d0ccd3-af56-498e-8f5a-90f1f1930434-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.147839 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869b63f2-a08c-46a5-ba10-a717b8d7dae4" path="/var/lib/kubelet/pods/869b63f2-a08c-46a5-ba10-a717b8d7dae4/volumes" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.148937 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d726d5-b44e-4e5c-aca5-3d2014a5289b" path="/var/lib/kubelet/pods/d3d726d5-b44e-4e5c-aca5-3d2014a5289b/volumes" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.302168 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.454701 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6","Type":"ContainerStarted","Data":"ea9e347a0bd50c590151a28197de70f89d5b97f612f696b57357dbd8f48a8faf"} Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.458300 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1e9ddf6-0f93-4a84-a740-26192235747a","Type":"ContainerStarted","Data":"1d6ff1344a5603c1531fc2e6a66ff27337f606fb0cf735d85106f705d3085987"} Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.461439 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.461477 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1d0ccd3-af56-498e-8f5a-90f1f1930434","Type":"ContainerDied","Data":"bd0965942c642c769ad47b540a149938c8c52f52e3d294766b609818ebbeb8d1"} Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.461513 4858 scope.go:117] "RemoveContainer" containerID="c65f30a3ea83e25ebcce3fc9db865befec928e1585a0f9b0d932d33a710254fd" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.492332 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.493814 4858 scope.go:117] "RemoveContainer" containerID="064f018074a9a27609186c6decb8d1ba1ec4d0306894ee46e1dad3caea37200d" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.512883 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.526688 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 20:32:14 crc kubenswrapper[4858]: E0127 20:32:14.527352 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d0ccd3-af56-498e-8f5a-90f1f1930434" containerName="nova-api-api" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.527378 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d0ccd3-af56-498e-8f5a-90f1f1930434" containerName="nova-api-api" Jan 27 20:32:14 crc kubenswrapper[4858]: E0127 20:32:14.527407 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d0ccd3-af56-498e-8f5a-90f1f1930434" containerName="nova-api-log" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.527415 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d0ccd3-af56-498e-8f5a-90f1f1930434" containerName="nova-api-log" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.527665 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1d0ccd3-af56-498e-8f5a-90f1f1930434" containerName="nova-api-api" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.527686 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1d0ccd3-af56-498e-8f5a-90f1f1930434" containerName="nova-api-log" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.529153 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.531596 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.565209 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.706098 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skl88\" (UniqueName: \"kubernetes.io/projected/7aca07ad-f0b6-461f-aa60-34437855954e-kube-api-access-skl88\") pod \"nova-api-0\" (UID: \"7aca07ad-f0b6-461f-aa60-34437855954e\") " pod="openstack/nova-api-0" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.706510 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aca07ad-f0b6-461f-aa60-34437855954e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7aca07ad-f0b6-461f-aa60-34437855954e\") " pod="openstack/nova-api-0" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.708288 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7aca07ad-f0b6-461f-aa60-34437855954e-logs\") pod \"nova-api-0\" (UID: \"7aca07ad-f0b6-461f-aa60-34437855954e\") " pod="openstack/nova-api-0" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.708540 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aca07ad-f0b6-461f-aa60-34437855954e-config-data\") pod \"nova-api-0\" (UID: \"7aca07ad-f0b6-461f-aa60-34437855954e\") " pod="openstack/nova-api-0" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.811252 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skl88\" (UniqueName: \"kubernetes.io/projected/7aca07ad-f0b6-461f-aa60-34437855954e-kube-api-access-skl88\") pod \"nova-api-0\" (UID: \"7aca07ad-f0b6-461f-aa60-34437855954e\") " pod="openstack/nova-api-0" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.811369 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aca07ad-f0b6-461f-aa60-34437855954e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7aca07ad-f0b6-461f-aa60-34437855954e\") " pod="openstack/nova-api-0" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.812020 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7aca07ad-f0b6-461f-aa60-34437855954e-logs\") pod \"nova-api-0\" (UID: \"7aca07ad-f0b6-461f-aa60-34437855954e\") " pod="openstack/nova-api-0" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.812372 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7aca07ad-f0b6-461f-aa60-34437855954e-logs\") pod \"nova-api-0\" (UID: \"7aca07ad-f0b6-461f-aa60-34437855954e\") " pod="openstack/nova-api-0" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.813887 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aca07ad-f0b6-461f-aa60-34437855954e-config-data\") pod \"nova-api-0\" (UID: \"7aca07ad-f0b6-461f-aa60-34437855954e\") " 
pod="openstack/nova-api-0" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.821569 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aca07ad-f0b6-461f-aa60-34437855954e-config-data\") pod \"nova-api-0\" (UID: \"7aca07ad-f0b6-461f-aa60-34437855954e\") " pod="openstack/nova-api-0" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.823200 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aca07ad-f0b6-461f-aa60-34437855954e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7aca07ad-f0b6-461f-aa60-34437855954e\") " pod="openstack/nova-api-0" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.832598 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skl88\" (UniqueName: \"kubernetes.io/projected/7aca07ad-f0b6-461f-aa60-34437855954e-kube-api-access-skl88\") pod \"nova-api-0\" (UID: \"7aca07ad-f0b6-461f-aa60-34437855954e\") " pod="openstack/nova-api-0" Jan 27 20:32:14 crc kubenswrapper[4858]: I0127 20:32:14.908591 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.032426 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cw7wg" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.127066 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-combined-ca-bundle\") pod \"e945c5a3-9e91-4cde-923f-764261351ad1\" (UID: \"e945c5a3-9e91-4cde-923f-764261351ad1\") " Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.127378 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-config-data\") pod \"e945c5a3-9e91-4cde-923f-764261351ad1\" (UID: \"e945c5a3-9e91-4cde-923f-764261351ad1\") " Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.127466 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-scripts\") pod \"e945c5a3-9e91-4cde-923f-764261351ad1\" (UID: \"e945c5a3-9e91-4cde-923f-764261351ad1\") " Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.127596 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4xh6\" (UniqueName: \"kubernetes.io/projected/e945c5a3-9e91-4cde-923f-764261351ad1-kube-api-access-m4xh6\") pod \"e945c5a3-9e91-4cde-923f-764261351ad1\" (UID: \"e945c5a3-9e91-4cde-923f-764261351ad1\") " Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.138518 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e945c5a3-9e91-4cde-923f-764261351ad1-kube-api-access-m4xh6" (OuterVolumeSpecName: "kube-api-access-m4xh6") pod "e945c5a3-9e91-4cde-923f-764261351ad1" (UID: "e945c5a3-9e91-4cde-923f-764261351ad1"). InnerVolumeSpecName "kube-api-access-m4xh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.145328 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-scripts" (OuterVolumeSpecName: "scripts") pod "e945c5a3-9e91-4cde-923f-764261351ad1" (UID: "e945c5a3-9e91-4cde-923f-764261351ad1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.166791 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e945c5a3-9e91-4cde-923f-764261351ad1" (UID: "e945c5a3-9e91-4cde-923f-764261351ad1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.167198 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-config-data" (OuterVolumeSpecName: "config-data") pod "e945c5a3-9e91-4cde-923f-764261351ad1" (UID: "e945c5a3-9e91-4cde-923f-764261351ad1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.232428 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.232467 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.232479 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e945c5a3-9e91-4cde-923f-764261351ad1-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.232488 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4xh6\" (UniqueName: \"kubernetes.io/projected/e945c5a3-9e91-4cde-923f-764261351ad1-kube-api-access-m4xh6\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.391457 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.488574 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7aca07ad-f0b6-461f-aa60-34437855954e","Type":"ContainerStarted","Data":"d3a81f69deb9a8a38112cd86e63b37eaf80c19509a67e3499633846a0e1a4805"} Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.492185 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6","Type":"ContainerStarted","Data":"bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef"} Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.501681 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1e9ddf6-0f93-4a84-a740-26192235747a","Type":"ContainerStarted","Data":"d5387d083b655d0e6133eea9c1f7b8176abfefbb671428f5aa2091a6af19fb08"} Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.505757 4858 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1e9ddf6-0f93-4a84-a740-26192235747a","Type":"ContainerStarted","Data":"bb39d993f59af6de844284a6647d6465e9e1d088423f0911fdd349f27a7c5aa1"} Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.530773 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 20:32:15 crc kubenswrapper[4858]: E0127 20:32:15.531319 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e945c5a3-9e91-4cde-923f-764261351ad1" containerName="nova-cell1-conductor-db-sync" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.531338 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e945c5a3-9e91-4cde-923f-764261351ad1" containerName="nova-cell1-conductor-db-sync" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.531601 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e945c5a3-9e91-4cde-923f-764261351ad1" containerName="nova-cell1-conductor-db-sync" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.532493 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.534132 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cw7wg" event={"ID":"e945c5a3-9e91-4cde-923f-764261351ad1","Type":"ContainerDied","Data":"fd29c88e1de54b66d05f204523eb17a4a0808c847452e91ba4d5dc19363eb623"} Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.534162 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd29c88e1de54b66d05f204523eb17a4a0808c847452e91ba4d5dc19363eb623" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.534212 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cw7wg" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.545313 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.5452832819999998 podStartE2EDuration="2.545283282s" podCreationTimestamp="2026-01-27 20:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:32:15.524818873 +0000 UTC m=+1480.232634609" watchObservedRunningTime="2026-01-27 20:32:15.545283282 +0000 UTC m=+1480.253098988" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.564087 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.596447 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.596421594 podStartE2EDuration="3.596421594s" podCreationTimestamp="2026-01-27 20:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:32:15.555793045 +0000 UTC m=+1480.263608751" watchObservedRunningTime="2026-01-27 20:32:15.596421594 +0000 UTC m=+1480.304237300" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.641940 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20379f76-6255-45c6-ba02-55526177a0c0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"20379f76-6255-45c6-ba02-55526177a0c0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.642315 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snpm7\" (UniqueName: \"kubernetes.io/projected/20379f76-6255-45c6-ba02-55526177a0c0-kube-api-access-snpm7\") pod \"nova-cell1-conductor-0\" (UID: \"20379f76-6255-45c6-ba02-55526177a0c0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.642446 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20379f76-6255-45c6-ba02-55526177a0c0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"20379f76-6255-45c6-ba02-55526177a0c0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.745119 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snpm7\" (UniqueName: \"kubernetes.io/projected/20379f76-6255-45c6-ba02-55526177a0c0-kube-api-access-snpm7\") pod \"nova-cell1-conductor-0\" (UID: \"20379f76-6255-45c6-ba02-55526177a0c0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.745586 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20379f76-6255-45c6-ba02-55526177a0c0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"20379f76-6255-45c6-ba02-55526177a0c0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.745687 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/20379f76-6255-45c6-ba02-55526177a0c0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"20379f76-6255-45c6-ba02-55526177a0c0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.750339 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20379f76-6255-45c6-ba02-55526177a0c0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"20379f76-6255-45c6-ba02-55526177a0c0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.753308 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20379f76-6255-45c6-ba02-55526177a0c0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"20379f76-6255-45c6-ba02-55526177a0c0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.770161 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snpm7\" (UniqueName: \"kubernetes.io/projected/20379f76-6255-45c6-ba02-55526177a0c0-kube-api-access-snpm7\") pod \"nova-cell1-conductor-0\" (UID: \"20379f76-6255-45c6-ba02-55526177a0c0\") " pod="openstack/nova-cell1-conductor-0" Jan 27 20:32:15 crc kubenswrapper[4858]: I0127 20:32:15.876274 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 20:32:16 crc kubenswrapper[4858]: I0127 20:32:16.093870 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1d0ccd3-af56-498e-8f5a-90f1f1930434" path="/var/lib/kubelet/pods/a1d0ccd3-af56-498e-8f5a-90f1f1930434/volumes" Jan 27 20:32:16 crc kubenswrapper[4858]: I0127 20:32:16.390472 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 20:32:16 crc kubenswrapper[4858]: W0127 20:32:16.391712 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20379f76_6255_45c6_ba02_55526177a0c0.slice/crio-a0c889c4a8ae2955de20b8de3d210679653bba45611b487aa779c09023cd2d8d WatchSource:0}: Error finding container a0c889c4a8ae2955de20b8de3d210679653bba45611b487aa779c09023cd2d8d: Status 404 returned error can't find the container with id a0c889c4a8ae2955de20b8de3d210679653bba45611b487aa779c09023cd2d8d Jan 27 20:32:16 crc kubenswrapper[4858]: I0127 20:32:16.550003 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"20379f76-6255-45c6-ba02-55526177a0c0","Type":"ContainerStarted","Data":"a0c889c4a8ae2955de20b8de3d210679653bba45611b487aa779c09023cd2d8d"} Jan 27 20:32:16 crc kubenswrapper[4858]: I0127 20:32:16.552162 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7aca07ad-f0b6-461f-aa60-34437855954e","Type":"ContainerStarted","Data":"d0d3aa7813e879a3ff2f205404548265b6a43964eb1521665a3985cbe6204470"} Jan 27 20:32:16 crc kubenswrapper[4858]: I0127 20:32:16.552201 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7aca07ad-f0b6-461f-aa60-34437855954e","Type":"ContainerStarted","Data":"cb7b40d3bc5970d12e8ad3b3015d9241211f7824b98f6f8c20556b0445186360"} Jan 27 20:32:16 crc kubenswrapper[4858]: I0127 20:32:16.584760 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.584739157 
podStartE2EDuration="2.584739157s" podCreationTimestamp="2026-01-27 20:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:32:16.576422887 +0000 UTC m=+1481.284238593" watchObservedRunningTime="2026-01-27 20:32:16.584739157 +0000 UTC m=+1481.292554873" Jan 27 20:32:17 crc kubenswrapper[4858]: I0127 20:32:17.563614 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"20379f76-6255-45c6-ba02-55526177a0c0","Type":"ContainerStarted","Data":"847bf799257d88a68284cd3f5f32ad30242ef35f00760a3e84abc9d19cf1415a"} Jan 27 20:32:17 crc kubenswrapper[4858]: I0127 20:32:17.590635 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.590605813 podStartE2EDuration="2.590605813s" podCreationTimestamp="2026-01-27 20:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:32:17.585012442 +0000 UTC m=+1482.292828158" watchObservedRunningTime="2026-01-27 20:32:17.590605813 +0000 UTC m=+1482.298421579" Jan 27 20:32:18 crc kubenswrapper[4858]: I0127 20:32:18.156725 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 20:32:18 crc kubenswrapper[4858]: I0127 20:32:18.156789 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 20:32:18 crc kubenswrapper[4858]: I0127 20:32:18.574022 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 20:32:18 crc kubenswrapper[4858]: I0127 20:32:18.875657 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 20:32:19 crc kubenswrapper[4858]: I0127 20:32:19.099051 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 20:32:20 crc kubenswrapper[4858]: I0127 20:32:20.248638 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q77hx" podUID="63e193df-42fe-456b-b277-bf843975384c" containerName="registry-server" probeResult="failure" output=< Jan 27 20:32:20 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 20:32:20 crc kubenswrapper[4858]: > Jan 27 20:32:22 crc kubenswrapper[4858]: I0127 20:32:22.869442 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 20:32:22 crc kubenswrapper[4858]: I0127 20:32:22.870239 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="d2c5e060-865d-405e-937d-1450a1928f49" containerName="kube-state-metrics" containerID="cri-o://1b57e8c0c17b466d9f4a11b955fbd66dfcf24478faf694d2530e4729033668a1" gracePeriod=30 Jan 27 20:32:23 crc kubenswrapper[4858]: I0127 20:32:23.160497 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 20:32:23 crc kubenswrapper[4858]: I0127 20:32:23.161295 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 20:32:23 crc kubenswrapper[4858]: I0127 20:32:23.512435 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 20:32:23 crc kubenswrapper[4858]: I0127 20:32:23.638527 4858 generic.go:334] "Generic (PLEG): container finished" podID="d2c5e060-865d-405e-937d-1450a1928f49" containerID="1b57e8c0c17b466d9f4a11b955fbd66dfcf24478faf694d2530e4729033668a1" exitCode=2 Jan 27 20:32:23 crc kubenswrapper[4858]: I0127 20:32:23.638928 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 20:32:23 crc kubenswrapper[4858]: I0127 20:32:23.638947 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d2c5e060-865d-405e-937d-1450a1928f49","Type":"ContainerDied","Data":"1b57e8c0c17b466d9f4a11b955fbd66dfcf24478faf694d2530e4729033668a1"} Jan 27 20:32:23 crc kubenswrapper[4858]: I0127 20:32:23.639002 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d2c5e060-865d-405e-937d-1450a1928f49","Type":"ContainerDied","Data":"4dfbefcccfc3ed40267bcd975bd9f439f245314cb156c1fc306e540f3b2356e1"} Jan 27 20:32:23 crc kubenswrapper[4858]: I0127 20:32:23.639025 4858 scope.go:117] "RemoveContainer" containerID="1b57e8c0c17b466d9f4a11b955fbd66dfcf24478faf694d2530e4729033668a1" Jan 27 20:32:23 crc kubenswrapper[4858]: I0127 20:32:23.670651 4858 scope.go:117] "RemoveContainer" containerID="1b57e8c0c17b466d9f4a11b955fbd66dfcf24478faf694d2530e4729033668a1" Jan 27 20:32:23 crc kubenswrapper[4858]: I0127 20:32:23.671503 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6mtr\" (UniqueName: \"kubernetes.io/projected/d2c5e060-865d-405e-937d-1450a1928f49-kube-api-access-v6mtr\") pod \"d2c5e060-865d-405e-937d-1450a1928f49\" (UID: \"d2c5e060-865d-405e-937d-1450a1928f49\") " Jan 27 20:32:23 crc kubenswrapper[4858]: E0127 20:32:23.671994 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b57e8c0c17b466d9f4a11b955fbd66dfcf24478faf694d2530e4729033668a1\": container with ID starting with 1b57e8c0c17b466d9f4a11b955fbd66dfcf24478faf694d2530e4729033668a1 not found: ID does not exist" containerID="1b57e8c0c17b466d9f4a11b955fbd66dfcf24478faf694d2530e4729033668a1" Jan 27 20:32:23 crc kubenswrapper[4858]: I0127 20:32:23.672036 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b57e8c0c17b466d9f4a11b955fbd66dfcf24478faf694d2530e4729033668a1"} err="failed to get container status \"1b57e8c0c17b466d9f4a11b955fbd66dfcf24478faf694d2530e4729033668a1\": rpc error: code = NotFound desc = could not find container \"1b57e8c0c17b466d9f4a11b955fbd66dfcf24478faf694d2530e4729033668a1\": container with ID starting with 1b57e8c0c17b466d9f4a11b955fbd66dfcf24478faf694d2530e4729033668a1 not found: ID does not exist" Jan 27 20:32:23 crc kubenswrapper[4858]: I0127 20:32:23.682913 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2c5e060-865d-405e-937d-1450a1928f49-kube-api-access-v6mtr" (OuterVolumeSpecName: "kube-api-access-v6mtr") pod "d2c5e060-865d-405e-937d-1450a1928f49" (UID: "d2c5e060-865d-405e-937d-1450a1928f49"). InnerVolumeSpecName "kube-api-access-v6mtr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:32:23 crc kubenswrapper[4858]: I0127 20:32:23.775025 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6mtr\" (UniqueName: \"kubernetes.io/projected/d2c5e060-865d-405e-937d-1450a1928f49-kube-api-access-v6mtr\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:23 crc kubenswrapper[4858]: I0127 20:32:23.875817 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 20:32:23 crc kubenswrapper[4858]: I0127 20:32:23.913737 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 20:32:23 crc kubenswrapper[4858]: I0127 20:32:23.978109 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 20:32:23 crc kubenswrapper[4858]: I0127 20:32:23.992879 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.007278 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 20:32:24 crc kubenswrapper[4858]: E0127 20:32:24.007889 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c5e060-865d-405e-937d-1450a1928f49" containerName="kube-state-metrics" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.007910 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c5e060-865d-405e-937d-1450a1928f49" containerName="kube-state-metrics" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.008204 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c5e060-865d-405e-937d-1450a1928f49" containerName="kube-state-metrics" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.009231 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.012012 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.012092 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.028990 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.083909 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d832896-a304-4a89-8ef2-607eea6623e5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4d832896-a304-4a89-8ef2-607eea6623e5\") " pod="openstack/kube-state-metrics-0" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.084068 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4d832896-a304-4a89-8ef2-607eea6623e5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4d832896-a304-4a89-8ef2-607eea6623e5\") " pod="openstack/kube-state-metrics-0" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.084114 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d832896-a304-4a89-8ef2-607eea6623e5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4d832896-a304-4a89-8ef2-607eea6623e5\") " pod="openstack/kube-state-metrics-0" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.084153 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26jdz\" (UniqueName: \"kubernetes.io/projected/4d832896-a304-4a89-8ef2-607eea6623e5-kube-api-access-26jdz\") pod \"kube-state-metrics-0\" (UID: \"4d832896-a304-4a89-8ef2-607eea6623e5\") " pod="openstack/kube-state-metrics-0" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.092109 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2c5e060-865d-405e-937d-1450a1928f49" path="/var/lib/kubelet/pods/d2c5e060-865d-405e-937d-1450a1928f49/volumes" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.174749 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e1e9ddf6-0f93-4a84-a740-26192235747a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.174803 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e1e9ddf6-0f93-4a84-a740-26192235747a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.185733 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d832896-a304-4a89-8ef2-607eea6623e5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4d832896-a304-4a89-8ef2-607eea6623e5\") " 
pod="openstack/kube-state-metrics-0" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.185803 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26jdz\" (UniqueName: \"kubernetes.io/projected/4d832896-a304-4a89-8ef2-607eea6623e5-kube-api-access-26jdz\") pod \"kube-state-metrics-0\" (UID: \"4d832896-a304-4a89-8ef2-607eea6623e5\") " pod="openstack/kube-state-metrics-0" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.186716 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d832896-a304-4a89-8ef2-607eea6623e5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4d832896-a304-4a89-8ef2-607eea6623e5\") " pod="openstack/kube-state-metrics-0" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.186825 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4d832896-a304-4a89-8ef2-607eea6623e5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4d832896-a304-4a89-8ef2-607eea6623e5\") " pod="openstack/kube-state-metrics-0" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.192076 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4d832896-a304-4a89-8ef2-607eea6623e5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4d832896-a304-4a89-8ef2-607eea6623e5\") " pod="openstack/kube-state-metrics-0" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.192111 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d832896-a304-4a89-8ef2-607eea6623e5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4d832896-a304-4a89-8ef2-607eea6623e5\") " pod="openstack/kube-state-metrics-0" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.192437 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d832896-a304-4a89-8ef2-607eea6623e5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4d832896-a304-4a89-8ef2-607eea6623e5\") " pod="openstack/kube-state-metrics-0" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.212583 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26jdz\" (UniqueName: \"kubernetes.io/projected/4d832896-a304-4a89-8ef2-607eea6623e5-kube-api-access-26jdz\") pod \"kube-state-metrics-0\" (UID: \"4d832896-a304-4a89-8ef2-607eea6623e5\") " pod="openstack/kube-state-metrics-0" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.327010 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.733786 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.833911 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.909143 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 20:32:24 crc kubenswrapper[4858]: I0127 20:32:24.910832 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 20:32:25 crc kubenswrapper[4858]: I0127 20:32:25.691961 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4d832896-a304-4a89-8ef2-607eea6623e5","Type":"ContainerStarted","Data":"b2443e16d9520fcb41aa8f096f2ca6c153abf3fa67269eeef7b1a35b73838be1"} Jan 27 20:32:25 crc kubenswrapper[4858]: I0127 20:32:25.692591 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 20:32:25 crc kubenswrapper[4858]: I0127 20:32:25.692631 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4d832896-a304-4a89-8ef2-607eea6623e5","Type":"ContainerStarted","Data":"277d6fb6be2db670c362dbeea63c4e525ef98cfd49dca545c607e0d2019a9366"} Jan 27 20:32:25 crc kubenswrapper[4858]: I0127 20:32:25.719096 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.307195081 podStartE2EDuration="2.719066519s" podCreationTimestamp="2026-01-27 20:32:23 +0000 UTC" firstStartedPulling="2026-01-27 20:32:24.849729121 +0000 UTC m=+1489.557544827" lastFinishedPulling="2026-01-27 20:32:25.261600559 +0000 UTC m=+1489.969416265" observedRunningTime="2026-01-27 20:32:25.717222916 +0000 UTC m=+1490.425038642" watchObservedRunningTime="2026-01-27 20:32:25.719066519 +0000 UTC m=+1490.426882225" Jan 27 20:32:25 crc kubenswrapper[4858]: I0127 20:32:25.917423 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 20:32:25 crc kubenswrapper[4858]: I0127 20:32:25.990850 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7aca07ad-f0b6-461f-aa60-34437855954e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 20:32:25 crc kubenswrapper[4858]: I0127 20:32:25.991127 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7aca07ad-f0b6-461f-aa60-34437855954e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 20:32:26 crc kubenswrapper[4858]: I0127 20:32:26.049704 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:32:26 crc kubenswrapper[4858]: I0127 20:32:26.050074 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a210a555-31ae-408f-800b-2441335f98e5" containerName="ceilometer-central-agent" containerID="cri-o://5a1ba7171b1c6075b4f77b4b1964573fe4e2878410ad4f9153877c185f62b9b4" gracePeriod=30 Jan 27 20:32:26 crc 
kubenswrapper[4858]: I0127 20:32:26.050137 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a210a555-31ae-408f-800b-2441335f98e5" containerName="proxy-httpd" containerID="cri-o://50691e4144897bf6d100ba78d5fc87cd0c885c127276937ef93e40c7217cf1ad" gracePeriod=30 Jan 27 20:32:26 crc kubenswrapper[4858]: I0127 20:32:26.050170 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a210a555-31ae-408f-800b-2441335f98e5" containerName="sg-core" containerID="cri-o://58e95ed7b900ab96dc4b13b3a4e5c459fb1e2e927c329ac0b1ccda8a42a784c9" gracePeriod=30 Jan 27 20:32:26 crc kubenswrapper[4858]: I0127 20:32:26.050260 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a210a555-31ae-408f-800b-2441335f98e5" containerName="ceilometer-notification-agent" containerID="cri-o://c664dc6529fa2e07ad56e2060df5f5430e3cea2c85f4ad85c69ed1896be124de" gracePeriod=30 Jan 27 20:32:26 crc kubenswrapper[4858]: I0127 20:32:26.712242 4858 generic.go:334] "Generic (PLEG): container finished" podID="a210a555-31ae-408f-800b-2441335f98e5" containerID="50691e4144897bf6d100ba78d5fc87cd0c885c127276937ef93e40c7217cf1ad" exitCode=0 Jan 27 20:32:26 crc kubenswrapper[4858]: I0127 20:32:26.712711 4858 generic.go:334] "Generic (PLEG): container finished" podID="a210a555-31ae-408f-800b-2441335f98e5" containerID="58e95ed7b900ab96dc4b13b3a4e5c459fb1e2e927c329ac0b1ccda8a42a784c9" exitCode=2 Jan 27 20:32:26 crc kubenswrapper[4858]: I0127 20:32:26.712725 4858 generic.go:334] "Generic (PLEG): container finished" podID="a210a555-31ae-408f-800b-2441335f98e5" containerID="5a1ba7171b1c6075b4f77b4b1964573fe4e2878410ad4f9153877c185f62b9b4" exitCode=0 Jan 27 20:32:26 crc kubenswrapper[4858]: I0127 20:32:26.712460 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a210a555-31ae-408f-800b-2441335f98e5","Type":"ContainerDied","Data":"50691e4144897bf6d100ba78d5fc87cd0c885c127276937ef93e40c7217cf1ad"} Jan 27 20:32:26 crc kubenswrapper[4858]: I0127 20:32:26.713036 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a210a555-31ae-408f-800b-2441335f98e5","Type":"ContainerDied","Data":"58e95ed7b900ab96dc4b13b3a4e5c459fb1e2e927c329ac0b1ccda8a42a784c9"} Jan 27 20:32:26 crc kubenswrapper[4858]: I0127 20:32:26.713058 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a210a555-31ae-408f-800b-2441335f98e5","Type":"ContainerDied","Data":"5a1ba7171b1c6075b4f77b4b1964573fe4e2878410ad4f9153877c185f62b9b4"} Jan 27 20:32:29 crc kubenswrapper[4858]: I0127 20:32:29.329065 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:32:29 crc kubenswrapper[4858]: I0127 20:32:29.329475 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:32:30 crc kubenswrapper[4858]: I0127 20:32:30.251952 4858 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-q77hx" podUID="63e193df-42fe-456b-b277-bf843975384c" containerName="registry-server" probeResult="failure" output=< Jan 27 20:32:30 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 20:32:30 crc kubenswrapper[4858]: > Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.571164 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.680567 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-config-data\") pod \"27b1a5ae-7542-4c09-be54-e7b08eb9fb04\" (UID: \"27b1a5ae-7542-4c09-be54-e7b08eb9fb04\") " Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.680843 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vfj8\" (UniqueName: \"kubernetes.io/projected/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-kube-api-access-8vfj8\") pod \"27b1a5ae-7542-4c09-be54-e7b08eb9fb04\" (UID: \"27b1a5ae-7542-4c09-be54-e7b08eb9fb04\") " Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.680928 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-combined-ca-bundle\") pod \"27b1a5ae-7542-4c09-be54-e7b08eb9fb04\" (UID: \"27b1a5ae-7542-4c09-be54-e7b08eb9fb04\") " Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.705737 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-kube-api-access-8vfj8" (OuterVolumeSpecName: "kube-api-access-8vfj8") pod "27b1a5ae-7542-4c09-be54-e7b08eb9fb04" (UID: "27b1a5ae-7542-4c09-be54-e7b08eb9fb04"). InnerVolumeSpecName "kube-api-access-8vfj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.714593 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27b1a5ae-7542-4c09-be54-e7b08eb9fb04" (UID: "27b1a5ae-7542-4c09-be54-e7b08eb9fb04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.732962 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-config-data" (OuterVolumeSpecName: "config-data") pod "27b1a5ae-7542-4c09-be54-e7b08eb9fb04" (UID: "27b1a5ae-7542-4c09-be54-e7b08eb9fb04"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.783529 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.783588 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vfj8\" (UniqueName: \"kubernetes.io/projected/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-kube-api-access-8vfj8\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.783600 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b1a5ae-7542-4c09-be54-e7b08eb9fb04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.795103 4858 generic.go:334] "Generic (PLEG): container finished" podID="27b1a5ae-7542-4c09-be54-e7b08eb9fb04" containerID="4ab6d7b5572f96cd5e26e7c6eaacb282ac27ed9889653fdd29d7d03e7a0c0d48" exitCode=137 Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.795137 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.795154 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"27b1a5ae-7542-4c09-be54-e7b08eb9fb04","Type":"ContainerDied","Data":"4ab6d7b5572f96cd5e26e7c6eaacb282ac27ed9889653fdd29d7d03e7a0c0d48"} Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.795211 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"27b1a5ae-7542-4c09-be54-e7b08eb9fb04","Type":"ContainerDied","Data":"e077d9ba94123f497ea6d8eef6d33139bbe2c5348f74a89501be314acb99dd2f"} Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.795230 4858 scope.go:117] "RemoveContainer" containerID="4ab6d7b5572f96cd5e26e7c6eaacb282ac27ed9889653fdd29d7d03e7a0c0d48" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.828109 4858 scope.go:117] "RemoveContainer" containerID="4ab6d7b5572f96cd5e26e7c6eaacb282ac27ed9889653fdd29d7d03e7a0c0d48" Jan 27 20:32:32 crc kubenswrapper[4858]: E0127 20:32:32.828923 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ab6d7b5572f96cd5e26e7c6eaacb282ac27ed9889653fdd29d7d03e7a0c0d48\": container with ID starting with 4ab6d7b5572f96cd5e26e7c6eaacb282ac27ed9889653fdd29d7d03e7a0c0d48 not found: ID does not exist" containerID="4ab6d7b5572f96cd5e26e7c6eaacb282ac27ed9889653fdd29d7d03e7a0c0d48" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.828960 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ab6d7b5572f96cd5e26e7c6eaacb282ac27ed9889653fdd29d7d03e7a0c0d48"} err="failed to get container status \"4ab6d7b5572f96cd5e26e7c6eaacb282ac27ed9889653fdd29d7d03e7a0c0d48\": rpc error: code = NotFound desc = could not find container \"4ab6d7b5572f96cd5e26e7c6eaacb282ac27ed9889653fdd29d7d03e7a0c0d48\": container with ID starting with 4ab6d7b5572f96cd5e26e7c6eaacb282ac27ed9889653fdd29d7d03e7a0c0d48 not found: ID does not exist" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.844319 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 
20:32:32.862294 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.883476 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 20:32:32 crc kubenswrapper[4858]: E0127 20:32:32.884168 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27b1a5ae-7542-4c09-be54-e7b08eb9fb04" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.884197 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="27b1a5ae-7542-4c09-be54-e7b08eb9fb04" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.884481 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="27b1a5ae-7542-4c09-be54-e7b08eb9fb04" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.885496 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.887364 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.887790 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.887923 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.898341 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.991506 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.992036 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vccdh\" (UniqueName: \"kubernetes.io/projected/e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91-kube-api-access-vccdh\") pod \"nova-cell1-novncproxy-0\" (UID: \"e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.992174 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.992316 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:32 crc kubenswrapper[4858]: I0127 20:32:32.992457 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:33 crc kubenswrapper[4858]: I0127 20:32:33.094419 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vccdh\" (UniqueName: \"kubernetes.io/projected/e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91-kube-api-access-vccdh\") pod \"nova-cell1-novncproxy-0\" (UID: \"e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:33 crc kubenswrapper[4858]: I0127 20:32:33.094819 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:33 crc kubenswrapper[4858]: I0127 20:32:33.094930 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:33 crc kubenswrapper[4858]: I0127 20:32:33.095067 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:33 crc kubenswrapper[4858]: I0127 20:32:33.095278 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:33 crc kubenswrapper[4858]: I0127 20:32:33.099382 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:33 crc kubenswrapper[4858]: I0127 20:32:33.099936 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:33 crc kubenswrapper[4858]: I0127 20:32:33.100144 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:33 crc kubenswrapper[4858]: I0127 20:32:33.101432 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:33 crc kubenswrapper[4858]: I0127 20:32:33.112485 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vccdh\" (UniqueName: \"kubernetes.io/projected/e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91-kube-api-access-vccdh\") pod \"nova-cell1-novncproxy-0\" (UID: \"e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:33 crc kubenswrapper[4858]: I0127 20:32:33.162415 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 20:32:33 crc kubenswrapper[4858]: I0127 20:32:33.163127 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 20:32:33 crc kubenswrapper[4858]: I0127 20:32:33.169631 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 20:32:33 crc kubenswrapper[4858]: I0127 20:32:33.204301 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:33 crc kubenswrapper[4858]: W0127 20:32:33.675703 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4fe74ae_d5b4_4a27_9bfe_e0039fa7ce91.slice/crio-8ba9e2c0624b4f769ae5725adea8e1ff0b21c4b6e8f43cc5b6afe07e5d023fe8 WatchSource:0}: Error finding container 8ba9e2c0624b4f769ae5725adea8e1ff0b21c4b6e8f43cc5b6afe07e5d023fe8: Status 404 returned error can't find the container with id 8ba9e2c0624b4f769ae5725adea8e1ff0b21c4b6e8f43cc5b6afe07e5d023fe8 Jan 27 20:32:33 crc kubenswrapper[4858]: I0127 20:32:33.680178 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 20:32:33 crc kubenswrapper[4858]: I0127 20:32:33.814576 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91","Type":"ContainerStarted","Data":"8ba9e2c0624b4f769ae5725adea8e1ff0b21c4b6e8f43cc5b6afe07e5d023fe8"} Jan 27 20:32:33 crc kubenswrapper[4858]: I0127 20:32:33.819687 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 20:32:34 crc kubenswrapper[4858]: I0127 20:32:34.086851 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27b1a5ae-7542-4c09-be54-e7b08eb9fb04" path="/var/lib/kubelet/pods/27b1a5ae-7542-4c09-be54-e7b08eb9fb04/volumes" Jan 27 20:32:34 crc kubenswrapper[4858]: I0127 20:32:34.348908 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 20:32:34 crc kubenswrapper[4858]: I0127 20:32:34.831034 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91","Type":"ContainerStarted","Data":"f1dcdbb6a0c2fe3fa8a9b9d54eca807eb8eb6d786ab4af94225ff49b10e2d8ea"} Jan 27 20:32:34 crc kubenswrapper[4858]: I0127 20:32:34.859835 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.859808485 podStartE2EDuration="2.859808485s" podCreationTimestamp="2026-01-27 20:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:32:34.846696717 +0000 UTC m=+1499.554512433" watchObservedRunningTime="2026-01-27 20:32:34.859808485 +0000 UTC m=+1499.567624191" Jan 27 20:32:34 crc kubenswrapper[4858]: I0127 20:32:34.917917 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 20:32:34 crc kubenswrapper[4858]: I0127 20:32:34.919352 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 20:32:34 crc kubenswrapper[4858]: I0127 20:32:34.924259 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 20:32:34 crc kubenswrapper[4858]: I0127 20:32:34.934747 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.382305 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.555423 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-config-data\") pod \"a210a555-31ae-408f-800b-2441335f98e5\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.555584 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5r4s\" (UniqueName: \"kubernetes.io/projected/a210a555-31ae-408f-800b-2441335f98e5-kube-api-access-f5r4s\") pod \"a210a555-31ae-408f-800b-2441335f98e5\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.555633 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a210a555-31ae-408f-800b-2441335f98e5-log-httpd\") pod \"a210a555-31ae-408f-800b-2441335f98e5\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.555701 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a210a555-31ae-408f-800b-2441335f98e5-run-httpd\") pod \"a210a555-31ae-408f-800b-2441335f98e5\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.555726 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-sg-core-conf-yaml\") pod \"a210a555-31ae-408f-800b-2441335f98e5\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.555774 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-combined-ca-bundle\") pod \"a210a555-31ae-408f-800b-2441335f98e5\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.555843 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-scripts\") pod \"a210a555-31ae-408f-800b-2441335f98e5\" (UID: \"a210a555-31ae-408f-800b-2441335f98e5\") " Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 
20:32:35.565397 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a210a555-31ae-408f-800b-2441335f98e5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a210a555-31ae-408f-800b-2441335f98e5" (UID: "a210a555-31ae-408f-800b-2441335f98e5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.565374 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a210a555-31ae-408f-800b-2441335f98e5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a210a555-31ae-408f-800b-2441335f98e5" (UID: "a210a555-31ae-408f-800b-2441335f98e5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.566915 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a210a555-31ae-408f-800b-2441335f98e5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.566941 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a210a555-31ae-408f-800b-2441335f98e5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.594507 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-scripts" (OuterVolumeSpecName: "scripts") pod "a210a555-31ae-408f-800b-2441335f98e5" (UID: "a210a555-31ae-408f-800b-2441335f98e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.595966 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a210a555-31ae-408f-800b-2441335f98e5-kube-api-access-f5r4s" (OuterVolumeSpecName: "kube-api-access-f5r4s") pod "a210a555-31ae-408f-800b-2441335f98e5" (UID: "a210a555-31ae-408f-800b-2441335f98e5"). InnerVolumeSpecName "kube-api-access-f5r4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.602600 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a210a555-31ae-408f-800b-2441335f98e5" (UID: "a210a555-31ae-408f-800b-2441335f98e5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.658481 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a210a555-31ae-408f-800b-2441335f98e5" (UID: "a210a555-31ae-408f-800b-2441335f98e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.670093 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.670139 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.670152 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5r4s\" (UniqueName: \"kubernetes.io/projected/a210a555-31ae-408f-800b-2441335f98e5-kube-api-access-f5r4s\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.670166 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.708645 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-config-data" (OuterVolumeSpecName: "config-data") pod "a210a555-31ae-408f-800b-2441335f98e5" (UID: "a210a555-31ae-408f-800b-2441335f98e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.771370 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a210a555-31ae-408f-800b-2441335f98e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.843204 4858 generic.go:334] "Generic (PLEG): container finished" podID="a210a555-31ae-408f-800b-2441335f98e5" containerID="c664dc6529fa2e07ad56e2060df5f5430e3cea2c85f4ad85c69ed1896be124de" exitCode=0 Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.843311 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.843384 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a210a555-31ae-408f-800b-2441335f98e5","Type":"ContainerDied","Data":"c664dc6529fa2e07ad56e2060df5f5430e3cea2c85f4ad85c69ed1896be124de"} Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.843500 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a210a555-31ae-408f-800b-2441335f98e5","Type":"ContainerDied","Data":"c1eb67cfc9459112deb4e3c36b81c72df2209a67f0c4018bbd6783497e82ab86"} Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.843539 4858 scope.go:117] "RemoveContainer" containerID="50691e4144897bf6d100ba78d5fc87cd0c885c127276937ef93e40c7217cf1ad" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.844807 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.867587 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.878690 4858 scope.go:117] "RemoveContainer" containerID="58e95ed7b900ab96dc4b13b3a4e5c459fb1e2e927c329ac0b1ccda8a42a784c9" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.911728 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.930582 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.932038 4858 scope.go:117] "RemoveContainer" containerID="c664dc6529fa2e07ad56e2060df5f5430e3cea2c85f4ad85c69ed1896be124de" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.941710 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:32:35 crc kubenswrapper[4858]: E0127 20:32:35.942313 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a210a555-31ae-408f-800b-2441335f98e5" containerName="ceilometer-notification-agent" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.942332 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a210a555-31ae-408f-800b-2441335f98e5" containerName="ceilometer-notification-agent" Jan 27 20:32:35 crc kubenswrapper[4858]: E0127 20:32:35.942360 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a210a555-31ae-408f-800b-2441335f98e5" containerName="proxy-httpd" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.942369 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a210a555-31ae-408f-800b-2441335f98e5" containerName="proxy-httpd" Jan 27 20:32:35 crc kubenswrapper[4858]: E0127 20:32:35.942391 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a210a555-31ae-408f-800b-2441335f98e5" containerName="ceilometer-central-agent" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.942400 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a210a555-31ae-408f-800b-2441335f98e5" containerName="ceilometer-central-agent" Jan 27 20:32:35 crc kubenswrapper[4858]: E0127 20:32:35.942426 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a210a555-31ae-408f-800b-2441335f98e5" containerName="sg-core" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.942433 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a210a555-31ae-408f-800b-2441335f98e5" 
containerName="sg-core" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.942700 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a210a555-31ae-408f-800b-2441335f98e5" containerName="ceilometer-central-agent" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.942728 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a210a555-31ae-408f-800b-2441335f98e5" containerName="sg-core" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.942741 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a210a555-31ae-408f-800b-2441335f98e5" containerName="proxy-httpd" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.942756 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a210a555-31ae-408f-800b-2441335f98e5" containerName="ceilometer-notification-agent" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.945068 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.949229 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.949237 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.949240 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.971449 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.992005 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-scripts\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.992052 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.992104 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.992161 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a5d256c-8005-417e-8f38-2625ef000db2-run-httpd\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.992191 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:35 
crc kubenswrapper[4858]: I0127 20:32:35.992226 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a5d256c-8005-417e-8f38-2625ef000db2-log-httpd\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.992243 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-config-data\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.992286 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9dkr\" (UniqueName: \"kubernetes.io/projected/6a5d256c-8005-417e-8f38-2625ef000db2-kube-api-access-v9dkr\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:35 crc kubenswrapper[4858]: I0127 20:32:35.993418 4858 scope.go:117] "RemoveContainer" containerID="5a1ba7171b1c6075b4f77b4b1964573fe4e2878410ad4f9153877c185f62b9b4" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.094614 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.094770 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a5d256c-8005-417e-8f38-2625ef000db2-run-httpd\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.094827 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.094912 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a5d256c-8005-417e-8f38-2625ef000db2-log-httpd\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.094939 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-config-data\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.095016 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9dkr\" (UniqueName: \"kubernetes.io/projected/6a5d256c-8005-417e-8f38-2625ef000db2-kube-api-access-v9dkr\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.095151 4858 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-scripts\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.095196 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.112839 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.113138 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.113341 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.124075 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a210a555-31ae-408f-800b-2441335f98e5" path="/var/lib/kubelet/pods/a210a555-31ae-408f-800b-2441335f98e5/volumes" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.126067 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a5d256c-8005-417e-8f38-2625ef000db2-log-httpd\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.129488 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.131373 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a5d256c-8005-417e-8f38-2625ef000db2-run-httpd\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.133401 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.134219 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9dkr\" (UniqueName: \"kubernetes.io/projected/6a5d256c-8005-417e-8f38-2625ef000db2-kube-api-access-v9dkr\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.134282 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-scripts\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.149253 4858 scope.go:117] "RemoveContainer" 
containerID="50691e4144897bf6d100ba78d5fc87cd0c885c127276937ef93e40c7217cf1ad" Jan 27 20:32:36 crc kubenswrapper[4858]: E0127 20:32:36.163515 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50691e4144897bf6d100ba78d5fc87cd0c885c127276937ef93e40c7217cf1ad\": container with ID starting with 50691e4144897bf6d100ba78d5fc87cd0c885c127276937ef93e40c7217cf1ad not found: ID does not exist" containerID="50691e4144897bf6d100ba78d5fc87cd0c885c127276937ef93e40c7217cf1ad" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.163584 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50691e4144897bf6d100ba78d5fc87cd0c885c127276937ef93e40c7217cf1ad"} err="failed to get container status \"50691e4144897bf6d100ba78d5fc87cd0c885c127276937ef93e40c7217cf1ad\": rpc error: code = NotFound desc = could not find container \"50691e4144897bf6d100ba78d5fc87cd0c885c127276937ef93e40c7217cf1ad\": container with ID starting with 50691e4144897bf6d100ba78d5fc87cd0c885c127276937ef93e40c7217cf1ad not found: ID does not exist" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.163619 4858 scope.go:117] "RemoveContainer" containerID="58e95ed7b900ab96dc4b13b3a4e5c459fb1e2e927c329ac0b1ccda8a42a784c9" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.164471 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:36 crc kubenswrapper[4858]: E0127 20:32:36.165927 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58e95ed7b900ab96dc4b13b3a4e5c459fb1e2e927c329ac0b1ccda8a42a784c9\": container with ID starting with 58e95ed7b900ab96dc4b13b3a4e5c459fb1e2e927c329ac0b1ccda8a42a784c9 not found: ID does not exist" containerID="58e95ed7b900ab96dc4b13b3a4e5c459fb1e2e927c329ac0b1ccda8a42a784c9" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.165971 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58e95ed7b900ab96dc4b13b3a4e5c459fb1e2e927c329ac0b1ccda8a42a784c9"} err="failed to get container status \"58e95ed7b900ab96dc4b13b3a4e5c459fb1e2e927c329ac0b1ccda8a42a784c9\": rpc error: code = NotFound desc = could not find container \"58e95ed7b900ab96dc4b13b3a4e5c459fb1e2e927c329ac0b1ccda8a42a784c9\": container with ID starting with 58e95ed7b900ab96dc4b13b3a4e5c459fb1e2e927c329ac0b1ccda8a42a784c9 not found: ID does not exist" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.165998 4858 scope.go:117] "RemoveContainer" containerID="c664dc6529fa2e07ad56e2060df5f5430e3cea2c85f4ad85c69ed1896be124de" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.175805 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-config-data\") pod \"ceilometer-0\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " pod="openstack/ceilometer-0" Jan 27 20:32:36 crc kubenswrapper[4858]: E0127 20:32:36.183974 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c664dc6529fa2e07ad56e2060df5f5430e3cea2c85f4ad85c69ed1896be124de\": container with ID starting with 
c664dc6529fa2e07ad56e2060df5f5430e3cea2c85f4ad85c69ed1896be124de not found: ID does not exist" containerID="c664dc6529fa2e07ad56e2060df5f5430e3cea2c85f4ad85c69ed1896be124de" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.184054 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c664dc6529fa2e07ad56e2060df5f5430e3cea2c85f4ad85c69ed1896be124de"} err="failed to get container status \"c664dc6529fa2e07ad56e2060df5f5430e3cea2c85f4ad85c69ed1896be124de\": rpc error: code = NotFound desc = could not find container \"c664dc6529fa2e07ad56e2060df5f5430e3cea2c85f4ad85c69ed1896be124de\": container with ID starting with c664dc6529fa2e07ad56e2060df5f5430e3cea2c85f4ad85c69ed1896be124de not found: ID does not exist" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.184093 4858 scope.go:117] "RemoveContainer" containerID="5a1ba7171b1c6075b4f77b4b1964573fe4e2878410ad4f9153877c185f62b9b4" Jan 27 20:32:36 crc kubenswrapper[4858]: E0127 20:32:36.184819 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a1ba7171b1c6075b4f77b4b1964573fe4e2878410ad4f9153877c185f62b9b4\": container with ID starting with 5a1ba7171b1c6075b4f77b4b1964573fe4e2878410ad4f9153877c185f62b9b4 not found: ID does not exist" containerID="5a1ba7171b1c6075b4f77b4b1964573fe4e2878410ad4f9153877c185f62b9b4" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.184850 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a1ba7171b1c6075b4f77b4b1964573fe4e2878410ad4f9153877c185f62b9b4"} err="failed to get container status \"5a1ba7171b1c6075b4f77b4b1964573fe4e2878410ad4f9153877c185f62b9b4\": rpc error: code = NotFound desc = could not find container \"5a1ba7171b1c6075b4f77b4b1964573fe4e2878410ad4f9153877c185f62b9b4\": container with ID starting with 5a1ba7171b1c6075b4f77b4b1964573fe4e2878410ad4f9153877c185f62b9b4 not found: ID does not exist" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.194103 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fb8cf77bc-8xnvj"] Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.196473 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.206466 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fb8cf77bc-8xnvj"] Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.314074 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-ovsdbserver-sb\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.314160 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-dns-swift-storage-0\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.314249 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-ovsdbserver-nb\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.314358 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-config\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.314399 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvhmm\" (UniqueName: \"kubernetes.io/projected/90f1e30f-2381-470e-9465-4d30253d91c7-kube-api-access-cvhmm\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.314462 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-dns-svc\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.413804 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.416613 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-config\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.416676 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvhmm\" (UniqueName: \"kubernetes.io/projected/90f1e30f-2381-470e-9465-4d30253d91c7-kube-api-access-cvhmm\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.416722 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-dns-svc\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.416766 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-ovsdbserver-sb\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.416796 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-dns-swift-storage-0\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.416848 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-ovsdbserver-nb\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.417685 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-config\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.418016 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-ovsdbserver-nb\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.418170 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-dns-swift-storage-0\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc 
kubenswrapper[4858]: I0127 20:32:36.418641 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-ovsdbserver-sb\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.419035 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-dns-svc\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.448077 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvhmm\" (UniqueName: \"kubernetes.io/projected/90f1e30f-2381-470e-9465-4d30253d91c7-kube-api-access-cvhmm\") pod \"dnsmasq-dns-5fb8cf77bc-8xnvj\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.580418 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:36 crc kubenswrapper[4858]: I0127 20:32:36.984701 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:32:36 crc kubenswrapper[4858]: W0127 20:32:36.986216 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a5d256c_8005_417e_8f38_2625ef000db2.slice/crio-b85e574be2e8f3f247a86e2a600461d0602f1897bd9fd8c0a1396deb04feee63 WatchSource:0}: Error finding container b85e574be2e8f3f247a86e2a600461d0602f1897bd9fd8c0a1396deb04feee63: Status 404 returned error can't find the container with id b85e574be2e8f3f247a86e2a600461d0602f1897bd9fd8c0a1396deb04feee63 Jan 27 20:32:37 crc kubenswrapper[4858]: I0127 20:32:37.150822 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fb8cf77bc-8xnvj"] Jan 27 20:32:37 crc kubenswrapper[4858]: I0127 20:32:37.873105 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a5d256c-8005-417e-8f38-2625ef000db2","Type":"ContainerStarted","Data":"92a7fb4f076d13b5c49a6d9281f0c027ec638fce0c238d6b64c5cddb15dd9a42"} Jan 27 20:32:37 crc kubenswrapper[4858]: I0127 20:32:37.873800 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a5d256c-8005-417e-8f38-2625ef000db2","Type":"ContainerStarted","Data":"9dfadf28f4294e2602278c3fbd4cf839f61b4b879b7322ac1ad87912c0582efa"} Jan 27 20:32:37 crc kubenswrapper[4858]: I0127 20:32:37.873817 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a5d256c-8005-417e-8f38-2625ef000db2","Type":"ContainerStarted","Data":"b85e574be2e8f3f247a86e2a600461d0602f1897bd9fd8c0a1396deb04feee63"} Jan 27 20:32:37 crc kubenswrapper[4858]: I0127 20:32:37.875588 4858 generic.go:334] "Generic (PLEG): container finished" podID="90f1e30f-2381-470e-9465-4d30253d91c7" containerID="2dbf29fdc6d8ee3347bba154276bcd4fad75082632ee5961381463249d72166b" exitCode=0 Jan 27 20:32:37 crc kubenswrapper[4858]: I0127 20:32:37.875670 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" 
event={"ID":"90f1e30f-2381-470e-9465-4d30253d91c7","Type":"ContainerDied","Data":"2dbf29fdc6d8ee3347bba154276bcd4fad75082632ee5961381463249d72166b"} Jan 27 20:32:37 crc kubenswrapper[4858]: I0127 20:32:37.875734 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" event={"ID":"90f1e30f-2381-470e-9465-4d30253d91c7","Type":"ContainerStarted","Data":"b3f3b5a7e94bbff8f6b44da4e2c85d8568019bda57722d8f5e8a6763dad716b7"} Jan 27 20:32:38 crc kubenswrapper[4858]: I0127 20:32:38.205271 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:38 crc kubenswrapper[4858]: I0127 20:32:38.828886 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:32:38 crc kubenswrapper[4858]: I0127 20:32:38.887331 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a5d256c-8005-417e-8f38-2625ef000db2","Type":"ContainerStarted","Data":"c341e512010937a8bdc949d18a0a927252764c8131d1e0a759c11bd8713483d0"} Jan 27 20:32:38 crc kubenswrapper[4858]: I0127 20:32:38.889122 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7aca07ad-f0b6-461f-aa60-34437855954e" containerName="nova-api-log" containerID="cri-o://cb7b40d3bc5970d12e8ad3b3015d9241211f7824b98f6f8c20556b0445186360" gracePeriod=30 Jan 27 20:32:38 crc kubenswrapper[4858]: I0127 20:32:38.889409 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7aca07ad-f0b6-461f-aa60-34437855954e" containerName="nova-api-api" containerID="cri-o://d0d3aa7813e879a3ff2f205404548265b6a43964eb1521665a3985cbe6204470" gracePeriod=30 Jan 27 20:32:38 crc kubenswrapper[4858]: I0127 20:32:38.889511 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" event={"ID":"90f1e30f-2381-470e-9465-4d30253d91c7","Type":"ContainerStarted","Data":"fd780543b236bc836f5e15fb636f7cbca6813533c8ea7132fd7eabcf6dd0bf2c"} Jan 27 20:32:38 crc kubenswrapper[4858]: I0127 20:32:38.924501 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" podStartSLOduration=2.92447973 podStartE2EDuration="2.92447973s" podCreationTimestamp="2026-01-27 20:32:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:32:38.911856427 +0000 UTC m=+1503.619672133" watchObservedRunningTime="2026-01-27 20:32:38.92447973 +0000 UTC m=+1503.632295436" Jan 27 20:32:39 crc kubenswrapper[4858]: I0127 20:32:39.135746 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:32:39 crc kubenswrapper[4858]: I0127 20:32:39.332396 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q77hx" Jan 27 20:32:39 crc kubenswrapper[4858]: I0127 20:32:39.399433 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q77hx" Jan 27 20:32:39 crc kubenswrapper[4858]: I0127 20:32:39.590174 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q77hx"] Jan 27 20:32:39 crc kubenswrapper[4858]: I0127 20:32:39.906508 4858 generic.go:334] "Generic (PLEG): container finished" podID="7aca07ad-f0b6-461f-aa60-34437855954e" 
containerID="cb7b40d3bc5970d12e8ad3b3015d9241211f7824b98f6f8c20556b0445186360" exitCode=143 Jan 27 20:32:39 crc kubenswrapper[4858]: I0127 20:32:39.907411 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7aca07ad-f0b6-461f-aa60-34437855954e","Type":"ContainerDied","Data":"cb7b40d3bc5970d12e8ad3b3015d9241211f7824b98f6f8c20556b0445186360"} Jan 27 20:32:39 crc kubenswrapper[4858]: I0127 20:32:39.907449 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:40 crc kubenswrapper[4858]: I0127 20:32:40.932613 4858 generic.go:334] "Generic (PLEG): container finished" podID="7aca07ad-f0b6-461f-aa60-34437855954e" containerID="d0d3aa7813e879a3ff2f205404548265b6a43964eb1521665a3985cbe6204470" exitCode=0 Jan 27 20:32:40 crc kubenswrapper[4858]: I0127 20:32:40.933057 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7aca07ad-f0b6-461f-aa60-34437855954e","Type":"ContainerDied","Data":"d0d3aa7813e879a3ff2f205404548265b6a43964eb1521665a3985cbe6204470"} Jan 27 20:32:40 crc kubenswrapper[4858]: I0127 20:32:40.933930 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q77hx" podUID="63e193df-42fe-456b-b277-bf843975384c" containerName="registry-server" containerID="cri-o://2cdd9aa7263ff834d57ee32bcebb944412aff4974a0c572c868ab01b82716001" gracePeriod=2 Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.401407 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.466862 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skl88\" (UniqueName: \"kubernetes.io/projected/7aca07ad-f0b6-461f-aa60-34437855954e-kube-api-access-skl88\") pod \"7aca07ad-f0b6-461f-aa60-34437855954e\" (UID: \"7aca07ad-f0b6-461f-aa60-34437855954e\") " Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.468121 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aca07ad-f0b6-461f-aa60-34437855954e-combined-ca-bundle\") pod \"7aca07ad-f0b6-461f-aa60-34437855954e\" (UID: \"7aca07ad-f0b6-461f-aa60-34437855954e\") " Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.468267 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7aca07ad-f0b6-461f-aa60-34437855954e-logs\") pod \"7aca07ad-f0b6-461f-aa60-34437855954e\" (UID: \"7aca07ad-f0b6-461f-aa60-34437855954e\") " Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.469448 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aca07ad-f0b6-461f-aa60-34437855954e-config-data\") pod \"7aca07ad-f0b6-461f-aa60-34437855954e\" (UID: \"7aca07ad-f0b6-461f-aa60-34437855954e\") " Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.470070 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aca07ad-f0b6-461f-aa60-34437855954e-logs" (OuterVolumeSpecName: "logs") pod "7aca07ad-f0b6-461f-aa60-34437855954e" (UID: "7aca07ad-f0b6-461f-aa60-34437855954e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.470382 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7aca07ad-f0b6-461f-aa60-34437855954e-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.476084 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aca07ad-f0b6-461f-aa60-34437855954e-kube-api-access-skl88" (OuterVolumeSpecName: "kube-api-access-skl88") pod "7aca07ad-f0b6-461f-aa60-34437855954e" (UID: "7aca07ad-f0b6-461f-aa60-34437855954e"). InnerVolumeSpecName "kube-api-access-skl88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.534639 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q77hx" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.544398 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aca07ad-f0b6-461f-aa60-34437855954e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7aca07ad-f0b6-461f-aa60-34437855954e" (UID: "7aca07ad-f0b6-461f-aa60-34437855954e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.544462 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aca07ad-f0b6-461f-aa60-34437855954e-config-data" (OuterVolumeSpecName: "config-data") pod "7aca07ad-f0b6-461f-aa60-34437855954e" (UID: "7aca07ad-f0b6-461f-aa60-34437855954e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.571836 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrghf\" (UniqueName: \"kubernetes.io/projected/63e193df-42fe-456b-b277-bf843975384c-kube-api-access-xrghf\") pod \"63e193df-42fe-456b-b277-bf843975384c\" (UID: \"63e193df-42fe-456b-b277-bf843975384c\") " Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.571989 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e193df-42fe-456b-b277-bf843975384c-utilities\") pod \"63e193df-42fe-456b-b277-bf843975384c\" (UID: \"63e193df-42fe-456b-b277-bf843975384c\") " Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.572050 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e193df-42fe-456b-b277-bf843975384c-catalog-content\") pod \"63e193df-42fe-456b-b277-bf843975384c\" (UID: \"63e193df-42fe-456b-b277-bf843975384c\") " Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.572616 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aca07ad-f0b6-461f-aa60-34437855954e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.572639 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skl88\" (UniqueName: \"kubernetes.io/projected/7aca07ad-f0b6-461f-aa60-34437855954e-kube-api-access-skl88\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.572652 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aca07ad-f0b6-461f-aa60-34437855954e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.575299 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63e193df-42fe-456b-b277-bf843975384c-utilities" (OuterVolumeSpecName: "utilities") pod "63e193df-42fe-456b-b277-bf843975384c" (UID: "63e193df-42fe-456b-b277-bf843975384c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.584171 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63e193df-42fe-456b-b277-bf843975384c-kube-api-access-xrghf" (OuterVolumeSpecName: "kube-api-access-xrghf") pod "63e193df-42fe-456b-b277-bf843975384c" (UID: "63e193df-42fe-456b-b277-bf843975384c"). InnerVolumeSpecName "kube-api-access-xrghf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.674526 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrghf\" (UniqueName: \"kubernetes.io/projected/63e193df-42fe-456b-b277-bf843975384c-kube-api-access-xrghf\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.674588 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63e193df-42fe-456b-b277-bf843975384c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.736510 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63e193df-42fe-456b-b277-bf843975384c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63e193df-42fe-456b-b277-bf843975384c" (UID: "63e193df-42fe-456b-b277-bf843975384c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.776584 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63e193df-42fe-456b-b277-bf843975384c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.946296 4858 generic.go:334] "Generic (PLEG): container finished" podID="63e193df-42fe-456b-b277-bf843975384c" containerID="2cdd9aa7263ff834d57ee32bcebb944412aff4974a0c572c868ab01b82716001" exitCode=0 Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.946373 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q77hx" event={"ID":"63e193df-42fe-456b-b277-bf843975384c","Type":"ContainerDied","Data":"2cdd9aa7263ff834d57ee32bcebb944412aff4974a0c572c868ab01b82716001"} Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.946411 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q77hx" event={"ID":"63e193df-42fe-456b-b277-bf843975384c","Type":"ContainerDied","Data":"e7c5e74fd28dc5976a9adb1835bc0485eca0b12d86f309ed7b54aeb28460d614"} Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.946433 4858 scope.go:117] "RemoveContainer" containerID="2cdd9aa7263ff834d57ee32bcebb944412aff4974a0c572c868ab01b82716001" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.946708 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q77hx" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.954325 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a5d256c-8005-417e-8f38-2625ef000db2","Type":"ContainerStarted","Data":"f9cae46b912d1eeb93e6a9c3f0e230b57715c5a4df1e956dcd2a54b2ddc6f34b"} Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.954502 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a5d256c-8005-417e-8f38-2625ef000db2" containerName="ceilometer-central-agent" containerID="cri-o://9dfadf28f4294e2602278c3fbd4cf839f61b4b879b7322ac1ad87912c0582efa" gracePeriod=30 Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.954880 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.955678 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a5d256c-8005-417e-8f38-2625ef000db2" containerName="proxy-httpd" containerID="cri-o://f9cae46b912d1eeb93e6a9c3f0e230b57715c5a4df1e956dcd2a54b2ddc6f34b" gracePeriod=30 Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.955853 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a5d256c-8005-417e-8f38-2625ef000db2" containerName="ceilometer-notification-agent" containerID="cri-o://92a7fb4f076d13b5c49a6d9281f0c027ec638fce0c238d6b64c5cddb15dd9a42" gracePeriod=30 Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.955920 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a5d256c-8005-417e-8f38-2625ef000db2" containerName="sg-core" containerID="cri-o://c341e512010937a8bdc949d18a0a927252764c8131d1e0a759c11bd8713483d0" gracePeriod=30 Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.964319 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7aca07ad-f0b6-461f-aa60-34437855954e","Type":"ContainerDied","Data":"d3a81f69deb9a8a38112cd86e63b37eaf80c19509a67e3499633846a0e1a4805"} Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.964444 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:32:41 crc kubenswrapper[4858]: I0127 20:32:41.988332 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.812101326 podStartE2EDuration="6.988306962s" podCreationTimestamp="2026-01-27 20:32:35 +0000 UTC" firstStartedPulling="2026-01-27 20:32:37.000796281 +0000 UTC m=+1501.708611987" lastFinishedPulling="2026-01-27 20:32:41.177001917 +0000 UTC m=+1505.884817623" observedRunningTime="2026-01-27 20:32:41.979033345 +0000 UTC m=+1506.686849071" watchObservedRunningTime="2026-01-27 20:32:41.988306962 +0000 UTC m=+1506.696122658" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.024207 4858 scope.go:117] "RemoveContainer" containerID="c6fd62ba5a8afd384c6c81bf8247f0bcef6cea6929d3a50c4365fa6f60a8af0d" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.043366 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q77hx"] Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.057011 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q77hx"] Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.087223 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63e193df-42fe-456b-b277-bf843975384c" path="/var/lib/kubelet/pods/63e193df-42fe-456b-b277-bf843975384c/volumes" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.093431 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.101780 4858 scope.go:117] "RemoveContainer" containerID="f01ff22f79bd0e5d35b62c21944a3b566e73fa2a7af2555f8e8ee2f04077c1d4" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.123191 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.151409 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 20:32:42 crc kubenswrapper[4858]: E0127 20:32:42.151993 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aca07ad-f0b6-461f-aa60-34437855954e" containerName="nova-api-api" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.152009 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aca07ad-f0b6-461f-aa60-34437855954e" containerName="nova-api-api" Jan 27 20:32:42 crc kubenswrapper[4858]: E0127 20:32:42.152022 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aca07ad-f0b6-461f-aa60-34437855954e" containerName="nova-api-log" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.152029 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aca07ad-f0b6-461f-aa60-34437855954e" containerName="nova-api-log" Jan 27 20:32:42 crc kubenswrapper[4858]: E0127 20:32:42.152055 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e193df-42fe-456b-b277-bf843975384c" containerName="extract-utilities" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.152062 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e193df-42fe-456b-b277-bf843975384c" containerName="extract-utilities" Jan 27 20:32:42 crc kubenswrapper[4858]: E0127 20:32:42.152074 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e193df-42fe-456b-b277-bf843975384c" containerName="registry-server" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.152080 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="63e193df-42fe-456b-b277-bf843975384c" containerName="registry-server" Jan 27 20:32:42 crc kubenswrapper[4858]: E0127 20:32:42.152101 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e193df-42fe-456b-b277-bf843975384c" containerName="extract-content" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.152108 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e193df-42fe-456b-b277-bf843975384c" containerName="extract-content" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.152312 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e193df-42fe-456b-b277-bf843975384c" containerName="registry-server" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.152325 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aca07ad-f0b6-461f-aa60-34437855954e" containerName="nova-api-api" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.152348 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aca07ad-f0b6-461f-aa60-34437855954e" containerName="nova-api-log" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.153490 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.153609 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.156613 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.157385 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.158617 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.192425 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.192489 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.192653 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-config-data\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.192718 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-public-tls-certs\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.192740 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wqglf\" (UniqueName: \"kubernetes.io/projected/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-kube-api-access-wqglf\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.192774 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-logs\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.240136 4858 scope.go:117] "RemoveContainer" containerID="2cdd9aa7263ff834d57ee32bcebb944412aff4974a0c572c868ab01b82716001" Jan 27 20:32:42 crc kubenswrapper[4858]: E0127 20:32:42.244532 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cdd9aa7263ff834d57ee32bcebb944412aff4974a0c572c868ab01b82716001\": container with ID starting with 2cdd9aa7263ff834d57ee32bcebb944412aff4974a0c572c868ab01b82716001 not found: ID does not exist" containerID="2cdd9aa7263ff834d57ee32bcebb944412aff4974a0c572c868ab01b82716001" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.244600 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cdd9aa7263ff834d57ee32bcebb944412aff4974a0c572c868ab01b82716001"} err="failed to get container status \"2cdd9aa7263ff834d57ee32bcebb944412aff4974a0c572c868ab01b82716001\": rpc error: code = NotFound desc = could not find container \"2cdd9aa7263ff834d57ee32bcebb944412aff4974a0c572c868ab01b82716001\": container with ID starting with 2cdd9aa7263ff834d57ee32bcebb944412aff4974a0c572c868ab01b82716001 not found: ID does not exist" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.244631 4858 scope.go:117] "RemoveContainer" containerID="c6fd62ba5a8afd384c6c81bf8247f0bcef6cea6929d3a50c4365fa6f60a8af0d" Jan 27 20:32:42 crc kubenswrapper[4858]: E0127 20:32:42.245153 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6fd62ba5a8afd384c6c81bf8247f0bcef6cea6929d3a50c4365fa6f60a8af0d\": container with ID starting with c6fd62ba5a8afd384c6c81bf8247f0bcef6cea6929d3a50c4365fa6f60a8af0d not found: ID does not exist" containerID="c6fd62ba5a8afd384c6c81bf8247f0bcef6cea6929d3a50c4365fa6f60a8af0d" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.245222 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6fd62ba5a8afd384c6c81bf8247f0bcef6cea6929d3a50c4365fa6f60a8af0d"} err="failed to get container status \"c6fd62ba5a8afd384c6c81bf8247f0bcef6cea6929d3a50c4365fa6f60a8af0d\": rpc error: code = NotFound desc = could not find container \"c6fd62ba5a8afd384c6c81bf8247f0bcef6cea6929d3a50c4365fa6f60a8af0d\": container with ID starting with c6fd62ba5a8afd384c6c81bf8247f0bcef6cea6929d3a50c4365fa6f60a8af0d not found: ID does not exist" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.245262 4858 scope.go:117] "RemoveContainer" containerID="f01ff22f79bd0e5d35b62c21944a3b566e73fa2a7af2555f8e8ee2f04077c1d4" Jan 27 20:32:42 crc kubenswrapper[4858]: E0127 20:32:42.245653 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f01ff22f79bd0e5d35b62c21944a3b566e73fa2a7af2555f8e8ee2f04077c1d4\": container with ID starting with 
f01ff22f79bd0e5d35b62c21944a3b566e73fa2a7af2555f8e8ee2f04077c1d4 not found: ID does not exist" containerID="f01ff22f79bd0e5d35b62c21944a3b566e73fa2a7af2555f8e8ee2f04077c1d4" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.245699 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f01ff22f79bd0e5d35b62c21944a3b566e73fa2a7af2555f8e8ee2f04077c1d4"} err="failed to get container status \"f01ff22f79bd0e5d35b62c21944a3b566e73fa2a7af2555f8e8ee2f04077c1d4\": rpc error: code = NotFound desc = could not find container \"f01ff22f79bd0e5d35b62c21944a3b566e73fa2a7af2555f8e8ee2f04077c1d4\": container with ID starting with f01ff22f79bd0e5d35b62c21944a3b566e73fa2a7af2555f8e8ee2f04077c1d4 not found: ID does not exist" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.245734 4858 scope.go:117] "RemoveContainer" containerID="d0d3aa7813e879a3ff2f205404548265b6a43964eb1521665a3985cbe6204470" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.278970 4858 scope.go:117] "RemoveContainer" containerID="cb7b40d3bc5970d12e8ad3b3015d9241211f7824b98f6f8c20556b0445186360" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.295035 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqglf\" (UniqueName: \"kubernetes.io/projected/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-kube-api-access-wqglf\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.295115 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-logs\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.295348 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.295389 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.295816 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-config-data\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.295947 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-public-tls-certs\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.295988 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-logs\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " 
pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.301331 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-config-data\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.304416 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.304979 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.305383 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-public-tls-certs\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.314328 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqglf\" (UniqueName: \"kubernetes.io/projected/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-kube-api-access-wqglf\") pod \"nova-api-0\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.559701 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.983384 4858 generic.go:334] "Generic (PLEG): container finished" podID="6a5d256c-8005-417e-8f38-2625ef000db2" containerID="f9cae46b912d1eeb93e6a9c3f0e230b57715c5a4df1e956dcd2a54b2ddc6f34b" exitCode=0 Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.983875 4858 generic.go:334] "Generic (PLEG): container finished" podID="6a5d256c-8005-417e-8f38-2625ef000db2" containerID="c341e512010937a8bdc949d18a0a927252764c8131d1e0a759c11bd8713483d0" exitCode=2 Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.983885 4858 generic.go:334] "Generic (PLEG): container finished" podID="6a5d256c-8005-417e-8f38-2625ef000db2" containerID="92a7fb4f076d13b5c49a6d9281f0c027ec638fce0c238d6b64c5cddb15dd9a42" exitCode=0 Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.983958 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a5d256c-8005-417e-8f38-2625ef000db2","Type":"ContainerDied","Data":"f9cae46b912d1eeb93e6a9c3f0e230b57715c5a4df1e956dcd2a54b2ddc6f34b"} Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.984018 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a5d256c-8005-417e-8f38-2625ef000db2","Type":"ContainerDied","Data":"c341e512010937a8bdc949d18a0a927252764c8131d1e0a759c11bd8713483d0"} Jan 27 20:32:42 crc kubenswrapper[4858]: I0127 20:32:42.984030 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a5d256c-8005-417e-8f38-2625ef000db2","Type":"ContainerDied","Data":"92a7fb4f076d13b5c49a6d9281f0c027ec638fce0c238d6b64c5cddb15dd9a42"} Jan 27 20:32:43 crc kubenswrapper[4858]: I0127 20:32:43.027210 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:32:43 crc kubenswrapper[4858]: W0127 20:32:43.030923 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0814fb48_7f7d_4ebd_a3f1_f7017387a3e1.slice/crio-a3a84e2367c9623532bdc56797b38268fd56ea826ad30096c224eb57f3ac89b3 WatchSource:0}: Error finding container a3a84e2367c9623532bdc56797b38268fd56ea826ad30096c224eb57f3ac89b3: Status 404 returned error can't find the container with id a3a84e2367c9623532bdc56797b38268fd56ea826ad30096c224eb57f3ac89b3 Jan 27 20:32:43 crc kubenswrapper[4858]: I0127 20:32:43.205050 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:43 crc kubenswrapper[4858]: I0127 20:32:43.229112 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.018663 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1","Type":"ContainerStarted","Data":"68f25368b47f17cadac0b000cec6ad63571e1fed02744dcc986cf69b6584a226"} Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.018972 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1","Type":"ContainerStarted","Data":"558cda788518ca28bf6cb21bc65a2c025ca36db44926c6cc5189f9b967acac01"} Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.018984 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1","Type":"ContainerStarted","Data":"a3a84e2367c9623532bdc56797b38268fd56ea826ad30096c224eb57f3ac89b3"} Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.051514 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.054314 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.054276819 podStartE2EDuration="2.054276819s" podCreationTimestamp="2026-01-27 20:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:32:44.035648862 +0000 UTC m=+1508.743464598" watchObservedRunningTime="2026-01-27 20:32:44.054276819 +0000 UTC m=+1508.762092575" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.090946 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aca07ad-f0b6-461f-aa60-34437855954e" path="/var/lib/kubelet/pods/7aca07ad-f0b6-461f-aa60-34437855954e/volumes" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.249670 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-q27s5"] Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.251203 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q27s5" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.254278 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.254328 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.258899 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-q27s5"] Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.352936 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-scripts\") pod \"nova-cell1-cell-mapping-q27s5\" (UID: \"ee8824c1-03d3-4583-808c-c308867369e5\") " pod="openstack/nova-cell1-cell-mapping-q27s5" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.353012 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q27s5\" (UID: \"ee8824c1-03d3-4583-808c-c308867369e5\") " pod="openstack/nova-cell1-cell-mapping-q27s5" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.353043 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5tjp\" (UniqueName: \"kubernetes.io/projected/ee8824c1-03d3-4583-808c-c308867369e5-kube-api-access-f5tjp\") pod \"nova-cell1-cell-mapping-q27s5\" (UID: \"ee8824c1-03d3-4583-808c-c308867369e5\") " pod="openstack/nova-cell1-cell-mapping-q27s5" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.353065 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-config-data\") pod \"nova-cell1-cell-mapping-q27s5\" (UID: 
\"ee8824c1-03d3-4583-808c-c308867369e5\") " pod="openstack/nova-cell1-cell-mapping-q27s5" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.454583 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-scripts\") pod \"nova-cell1-cell-mapping-q27s5\" (UID: \"ee8824c1-03d3-4583-808c-c308867369e5\") " pod="openstack/nova-cell1-cell-mapping-q27s5" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.454960 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q27s5\" (UID: \"ee8824c1-03d3-4583-808c-c308867369e5\") " pod="openstack/nova-cell1-cell-mapping-q27s5" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.454989 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5tjp\" (UniqueName: \"kubernetes.io/projected/ee8824c1-03d3-4583-808c-c308867369e5-kube-api-access-f5tjp\") pod \"nova-cell1-cell-mapping-q27s5\" (UID: \"ee8824c1-03d3-4583-808c-c308867369e5\") " pod="openstack/nova-cell1-cell-mapping-q27s5" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.455015 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-config-data\") pod \"nova-cell1-cell-mapping-q27s5\" (UID: \"ee8824c1-03d3-4583-808c-c308867369e5\") " pod="openstack/nova-cell1-cell-mapping-q27s5" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.461312 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q27s5\" (UID: \"ee8824c1-03d3-4583-808c-c308867369e5\") " pod="openstack/nova-cell1-cell-mapping-q27s5" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.462299 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-config-data\") pod \"nova-cell1-cell-mapping-q27s5\" (UID: \"ee8824c1-03d3-4583-808c-c308867369e5\") " pod="openstack/nova-cell1-cell-mapping-q27s5" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.464240 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-scripts\") pod \"nova-cell1-cell-mapping-q27s5\" (UID: \"ee8824c1-03d3-4583-808c-c308867369e5\") " pod="openstack/nova-cell1-cell-mapping-q27s5" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.474835 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5tjp\" (UniqueName: \"kubernetes.io/projected/ee8824c1-03d3-4583-808c-c308867369e5-kube-api-access-f5tjp\") pod \"nova-cell1-cell-mapping-q27s5\" (UID: \"ee8824c1-03d3-4583-808c-c308867369e5\") " pod="openstack/nova-cell1-cell-mapping-q27s5" Jan 27 20:32:44 crc kubenswrapper[4858]: I0127 20:32:44.580953 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q27s5" Jan 27 20:32:45 crc kubenswrapper[4858]: I0127 20:32:45.077096 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-q27s5"] Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.637396 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.690324 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-sg-core-conf-yaml\") pod \"6a5d256c-8005-417e-8f38-2625ef000db2\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.690488 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-combined-ca-bundle\") pod \"6a5d256c-8005-417e-8f38-2625ef000db2\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.690614 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a5d256c-8005-417e-8f38-2625ef000db2-run-httpd\") pod \"6a5d256c-8005-417e-8f38-2625ef000db2\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.690707 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-config-data\") pod \"6a5d256c-8005-417e-8f38-2625ef000db2\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.690756 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-scripts\") pod \"6a5d256c-8005-417e-8f38-2625ef000db2\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.690794 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-ceilometer-tls-certs\") pod \"6a5d256c-8005-417e-8f38-2625ef000db2\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.690899 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9dkr\" (UniqueName: \"kubernetes.io/projected/6a5d256c-8005-417e-8f38-2625ef000db2-kube-api-access-v9dkr\") pod \"6a5d256c-8005-417e-8f38-2625ef000db2\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.690988 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a5d256c-8005-417e-8f38-2625ef000db2-log-httpd\") pod \"6a5d256c-8005-417e-8f38-2625ef000db2\" (UID: \"6a5d256c-8005-417e-8f38-2625ef000db2\") " Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.691658 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a5d256c-8005-417e-8f38-2625ef000db2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6a5d256c-8005-417e-8f38-2625ef000db2" (UID: 
"6a5d256c-8005-417e-8f38-2625ef000db2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.691774 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a5d256c-8005-417e-8f38-2625ef000db2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6a5d256c-8005-417e-8f38-2625ef000db2" (UID: "6a5d256c-8005-417e-8f38-2625ef000db2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.692352 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a5d256c-8005-417e-8f38-2625ef000db2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.692370 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a5d256c-8005-417e-8f38-2625ef000db2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.698708 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-scripts" (OuterVolumeSpecName: "scripts") pod "6a5d256c-8005-417e-8f38-2625ef000db2" (UID: "6a5d256c-8005-417e-8f38-2625ef000db2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.705752 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a5d256c-8005-417e-8f38-2625ef000db2-kube-api-access-v9dkr" (OuterVolumeSpecName: "kube-api-access-v9dkr") pod "6a5d256c-8005-417e-8f38-2625ef000db2" (UID: "6a5d256c-8005-417e-8f38-2625ef000db2"). InnerVolumeSpecName "kube-api-access-v9dkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.754640 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6a5d256c-8005-417e-8f38-2625ef000db2" (UID: "6a5d256c-8005-417e-8f38-2625ef000db2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.802911 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6a5d256c-8005-417e-8f38-2625ef000db2" (UID: "6a5d256c-8005-417e-8f38-2625ef000db2"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.803187 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.803251 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.803269 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9dkr\" (UniqueName: \"kubernetes.io/projected/6a5d256c-8005-417e-8f38-2625ef000db2-kube-api-access-v9dkr\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.803282 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.820597 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a5d256c-8005-417e-8f38-2625ef000db2" (UID: "6a5d256c-8005-417e-8f38-2625ef000db2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.855569 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-config-data" (OuterVolumeSpecName: "config-data") pod "6a5d256c-8005-417e-8f38-2625ef000db2" (UID: "6a5d256c-8005-417e-8f38-2625ef000db2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.913189 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:45.913222 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a5d256c-8005-417e-8f38-2625ef000db2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.046926 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q27s5" event={"ID":"ee8824c1-03d3-4583-808c-c308867369e5","Type":"ContainerStarted","Data":"e733cf2c2aa94c74cb74a00c484e07cd64a85c7443ea5a453ef88b73df4437e1"} Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.046984 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q27s5" event={"ID":"ee8824c1-03d3-4583-808c-c308867369e5","Type":"ContainerStarted","Data":"67dc15c6d763016009eca05d9cafcc6c16efb0478f498caaa9fd94f77a36a4ff"} Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.052526 4858 generic.go:334] "Generic (PLEG): container finished" podID="6a5d256c-8005-417e-8f38-2625ef000db2" containerID="9dfadf28f4294e2602278c3fbd4cf839f61b4b879b7322ac1ad87912c0582efa" exitCode=0 Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.052571 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a5d256c-8005-417e-8f38-2625ef000db2","Type":"ContainerDied","Data":"9dfadf28f4294e2602278c3fbd4cf839f61b4b879b7322ac1ad87912c0582efa"} Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.052619 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a5d256c-8005-417e-8f38-2625ef000db2","Type":"ContainerDied","Data":"b85e574be2e8f3f247a86e2a600461d0602f1897bd9fd8c0a1396deb04feee63"} Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.052642 4858 scope.go:117] "RemoveContainer" containerID="f9cae46b912d1eeb93e6a9c3f0e230b57715c5a4df1e956dcd2a54b2ddc6f34b" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.052674 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.067373 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-q27s5" podStartSLOduration=2.067357191 podStartE2EDuration="2.067357191s" podCreationTimestamp="2026-01-27 20:32:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:32:46.065252571 +0000 UTC m=+1510.773068277" watchObservedRunningTime="2026-01-27 20:32:46.067357191 +0000 UTC m=+1510.775172897" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.078500 4858 scope.go:117] "RemoveContainer" containerID="c341e512010937a8bdc949d18a0a927252764c8131d1e0a759c11bd8713483d0" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.225758 4858 scope.go:117] "RemoveContainer" containerID="92a7fb4f076d13b5c49a6d9281f0c027ec638fce0c238d6b64c5cddb15dd9a42" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.244643 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.259770 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.306621 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:32:46 crc kubenswrapper[4858]: E0127 20:32:46.307113 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a5d256c-8005-417e-8f38-2625ef000db2" containerName="sg-core" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.307128 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a5d256c-8005-417e-8f38-2625ef000db2" containerName="sg-core" Jan 27 20:32:46 crc kubenswrapper[4858]: E0127 20:32:46.307158 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a5d256c-8005-417e-8f38-2625ef000db2" containerName="ceilometer-notification-agent" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.307165 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a5d256c-8005-417e-8f38-2625ef000db2" containerName="ceilometer-notification-agent" Jan 27 20:32:46 crc kubenswrapper[4858]: E0127 20:32:46.307193 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a5d256c-8005-417e-8f38-2625ef000db2" containerName="ceilometer-central-agent" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.307200 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a5d256c-8005-417e-8f38-2625ef000db2" containerName="ceilometer-central-agent" Jan 27 20:32:46 crc kubenswrapper[4858]: E0127 20:32:46.307220 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a5d256c-8005-417e-8f38-2625ef000db2" containerName="proxy-httpd" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.307226 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a5d256c-8005-417e-8f38-2625ef000db2" containerName="proxy-httpd" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.307401 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a5d256c-8005-417e-8f38-2625ef000db2" containerName="ceilometer-central-agent" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.307415 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a5d256c-8005-417e-8f38-2625ef000db2" containerName="ceilometer-notification-agent" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.307432 4858 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6a5d256c-8005-417e-8f38-2625ef000db2" containerName="sg-core" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.307441 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a5d256c-8005-417e-8f38-2625ef000db2" containerName="proxy-httpd" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.309322 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.319421 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.319715 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.319864 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.326323 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.368798 4858 scope.go:117] "RemoveContainer" containerID="9dfadf28f4294e2602278c3fbd4cf839f61b4b879b7322ac1ad87912c0582efa" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.434394 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-log-httpd\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.435711 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.435760 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-scripts\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.436173 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.436452 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-run-httpd\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.436521 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crtds\" (UniqueName: \"kubernetes.io/projected/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-kube-api-access-crtds\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0" Jan 27 
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.436645 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-config-data\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.436700 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.437967 4858 scope.go:117] "RemoveContainer" containerID="f9cae46b912d1eeb93e6a9c3f0e230b57715c5a4df1e956dcd2a54b2ddc6f34b"
Jan 27 20:32:46 crc kubenswrapper[4858]: E0127 20:32:46.439287 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9cae46b912d1eeb93e6a9c3f0e230b57715c5a4df1e956dcd2a54b2ddc6f34b\": container with ID starting with f9cae46b912d1eeb93e6a9c3f0e230b57715c5a4df1e956dcd2a54b2ddc6f34b not found: ID does not exist" containerID="f9cae46b912d1eeb93e6a9c3f0e230b57715c5a4df1e956dcd2a54b2ddc6f34b"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.439319 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9cae46b912d1eeb93e6a9c3f0e230b57715c5a4df1e956dcd2a54b2ddc6f34b"} err="failed to get container status \"f9cae46b912d1eeb93e6a9c3f0e230b57715c5a4df1e956dcd2a54b2ddc6f34b\": rpc error: code = NotFound desc = could not find container \"f9cae46b912d1eeb93e6a9c3f0e230b57715c5a4df1e956dcd2a54b2ddc6f34b\": container with ID starting with f9cae46b912d1eeb93e6a9c3f0e230b57715c5a4df1e956dcd2a54b2ddc6f34b not found: ID does not exist"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.439341 4858 scope.go:117] "RemoveContainer" containerID="c341e512010937a8bdc949d18a0a927252764c8131d1e0a759c11bd8713483d0"
Jan 27 20:32:46 crc kubenswrapper[4858]: E0127 20:32:46.439717 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c341e512010937a8bdc949d18a0a927252764c8131d1e0a759c11bd8713483d0\": container with ID starting with c341e512010937a8bdc949d18a0a927252764c8131d1e0a759c11bd8713483d0 not found: ID does not exist" containerID="c341e512010937a8bdc949d18a0a927252764c8131d1e0a759c11bd8713483d0"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.439737 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c341e512010937a8bdc949d18a0a927252764c8131d1e0a759c11bd8713483d0"} err="failed to get container status \"c341e512010937a8bdc949d18a0a927252764c8131d1e0a759c11bd8713483d0\": rpc error: code = NotFound desc = could not find container \"c341e512010937a8bdc949d18a0a927252764c8131d1e0a759c11bd8713483d0\": container with ID starting with c341e512010937a8bdc949d18a0a927252764c8131d1e0a759c11bd8713483d0 not found: ID does not exist"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.439749 4858 scope.go:117] "RemoveContainer" containerID="92a7fb4f076d13b5c49a6d9281f0c027ec638fce0c238d6b64c5cddb15dd9a42"
Jan 27 20:32:46 crc kubenswrapper[4858]: E0127 20:32:46.443042 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92a7fb4f076d13b5c49a6d9281f0c027ec638fce0c238d6b64c5cddb15dd9a42\": container with ID starting with 92a7fb4f076d13b5c49a6d9281f0c027ec638fce0c238d6b64c5cddb15dd9a42 not found: ID does not exist" containerID="92a7fb4f076d13b5c49a6d9281f0c027ec638fce0c238d6b64c5cddb15dd9a42"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.443123 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92a7fb4f076d13b5c49a6d9281f0c027ec638fce0c238d6b64c5cddb15dd9a42"} err="failed to get container status \"92a7fb4f076d13b5c49a6d9281f0c027ec638fce0c238d6b64c5cddb15dd9a42\": rpc error: code = NotFound desc = could not find container \"92a7fb4f076d13b5c49a6d9281f0c027ec638fce0c238d6b64c5cddb15dd9a42\": container with ID starting with 92a7fb4f076d13b5c49a6d9281f0c027ec638fce0c238d6b64c5cddb15dd9a42 not found: ID does not exist"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.443162 4858 scope.go:117] "RemoveContainer" containerID="9dfadf28f4294e2602278c3fbd4cf839f61b4b879b7322ac1ad87912c0582efa"
Jan 27 20:32:46 crc kubenswrapper[4858]: E0127 20:32:46.443566 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dfadf28f4294e2602278c3fbd4cf839f61b4b879b7322ac1ad87912c0582efa\": container with ID starting with 9dfadf28f4294e2602278c3fbd4cf839f61b4b879b7322ac1ad87912c0582efa not found: ID does not exist" containerID="9dfadf28f4294e2602278c3fbd4cf839f61b4b879b7322ac1ad87912c0582efa"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.443599 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dfadf28f4294e2602278c3fbd4cf839f61b4b879b7322ac1ad87912c0582efa"} err="failed to get container status \"9dfadf28f4294e2602278c3fbd4cf839f61b4b879b7322ac1ad87912c0582efa\": rpc error: code = NotFound desc = could not find container \"9dfadf28f4294e2602278c3fbd4cf839f61b4b879b7322ac1ad87912c0582efa\": container with ID starting with 9dfadf28f4294e2602278c3fbd4cf839f61b4b879b7322ac1ad87912c0582efa not found: ID does not exist"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.539024 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.539155 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-log-httpd\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.539280 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.539324 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-scripts\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.539400 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.539488 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-run-httpd\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.539528 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crtds\" (UniqueName: \"kubernetes.io/projected/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-kube-api-access-crtds\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.539600 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-config-data\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.540542 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-log-httpd\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.540833 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-run-httpd\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.545565 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.546070 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-config-data\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.555339 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.555815 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-scripts\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0"
Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.564614 4858
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.565384 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crtds\" (UniqueName: \"kubernetes.io/projected/4e9e36b1-d81b-4be3-a0d7-ee413bdece24-kube-api-access-crtds\") pod \"ceilometer-0\" (UID: \"4e9e36b1-d81b-4be3-a0d7-ee413bdece24\") " pod="openstack/ceilometer-0" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.582689 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.661122 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c97cfd99-mrcvv"] Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.661489 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv" podUID="1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06" containerName="dnsmasq-dns" containerID="cri-o://aa80df46c0396e332d14f09514bf939e5883b938c3a07f2ff030eb229f93fa94" gracePeriod=10 Jan 27 20:32:46 crc kubenswrapper[4858]: I0127 20:32:46.710247 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.090501 4858 generic.go:334] "Generic (PLEG): container finished" podID="1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06" containerID="aa80df46c0396e332d14f09514bf939e5883b938c3a07f2ff030eb229f93fa94" exitCode=0 Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.090639 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv" event={"ID":"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06","Type":"ContainerDied","Data":"aa80df46c0396e332d14f09514bf939e5883b938c3a07f2ff030eb229f93fa94"} Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.300796 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv" Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.360158 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-config\") pod \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.360333 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-dns-swift-storage-0\") pod \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.360384 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-dns-svc\") pod \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.360409 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-ovsdbserver-sb\") pod \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.360639 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-ovsdbserver-nb\") pod \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.360677 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwdj4\" (UniqueName: \"kubernetes.io/projected/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-kube-api-access-xwdj4\") pod \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\" (UID: \"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06\") " Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.380979 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-kube-api-access-xwdj4" (OuterVolumeSpecName: "kube-api-access-xwdj4") pod "1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06" (UID: "1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06"). InnerVolumeSpecName "kube-api-access-xwdj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.433242 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.441031 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06" (UID: "1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.450617 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06" (UID: "1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:32:47 crc kubenswrapper[4858]: W0127 20:32:47.452228 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e9e36b1_d81b_4be3_a0d7_ee413bdece24.slice/crio-18e68ee2ad8840cd59f6a0f610a2d27f81bb649e48b34585a6f34b509e20074f WatchSource:0}: Error finding container 18e68ee2ad8840cd59f6a0f610a2d27f81bb649e48b34585a6f34b509e20074f: Status 404 returned error can't find the container with id 18e68ee2ad8840cd59f6a0f610a2d27f81bb649e48b34585a6f34b509e20074f Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.463727 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.463760 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.463773 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwdj4\" (UniqueName: \"kubernetes.io/projected/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-kube-api-access-xwdj4\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.464052 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06" (UID: "1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.494529 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06" (UID: "1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.509170 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-config" (OuterVolumeSpecName: "config") pod "1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06" (UID: "1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.566495 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.566972 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:47 crc kubenswrapper[4858]: I0127 20:32:47.566984 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:48 crc kubenswrapper[4858]: I0127 20:32:48.107155 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a5d256c-8005-417e-8f38-2625ef000db2" path="/var/lib/kubelet/pods/6a5d256c-8005-417e-8f38-2625ef000db2/volumes" Jan 27 20:32:48 crc kubenswrapper[4858]: I0127 20:32:48.110879 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv" Jan 27 20:32:48 crc kubenswrapper[4858]: I0127 20:32:48.112846 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c97cfd99-mrcvv" event={"ID":"1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06","Type":"ContainerDied","Data":"d4f172c8684f496a471c80bc313be4d06f959176dadaf3bd3a1c2ff268a10b40"} Jan 27 20:32:48 crc kubenswrapper[4858]: I0127 20:32:48.112961 4858 scope.go:117] "RemoveContainer" containerID="aa80df46c0396e332d14f09514bf939e5883b938c3a07f2ff030eb229f93fa94" Jan 27 20:32:48 crc kubenswrapper[4858]: I0127 20:32:48.123863 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e9e36b1-d81b-4be3-a0d7-ee413bdece24","Type":"ContainerStarted","Data":"839888058d5b1708d56e1532c4c96f5846699f8bc7a803ac3dc410ebdd063a1c"} Jan 27 20:32:48 crc kubenswrapper[4858]: I0127 20:32:48.123905 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e9e36b1-d81b-4be3-a0d7-ee413bdece24","Type":"ContainerStarted","Data":"c94bc0262f548746dae82e9f15574b11482ff28398fff55b269dda4093c0b418"} Jan 27 20:32:48 crc kubenswrapper[4858]: I0127 20:32:48.123918 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e9e36b1-d81b-4be3-a0d7-ee413bdece24","Type":"ContainerStarted","Data":"18e68ee2ad8840cd59f6a0f610a2d27f81bb649e48b34585a6f34b509e20074f"} Jan 27 20:32:48 crc kubenswrapper[4858]: I0127 20:32:48.154507 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c97cfd99-mrcvv"] Jan 27 20:32:48 crc kubenswrapper[4858]: I0127 20:32:48.164252 4858 scope.go:117] "RemoveContainer" containerID="bb8ccfac3658f5615cffd177ef35ab48029e6d2fc7b6ae4405485fd867dfc958" Jan 27 20:32:48 crc kubenswrapper[4858]: I0127 20:32:48.177207 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59c97cfd99-mrcvv"] Jan 27 20:32:49 crc kubenswrapper[4858]: I0127 20:32:49.137724 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e9e36b1-d81b-4be3-a0d7-ee413bdece24","Type":"ContainerStarted","Data":"66bdc77e3d75b22954d32d82a3506c6dc2b2132992d10aeec3816588fc936270"} Jan 27 20:32:50 crc kubenswrapper[4858]: I0127 20:32:50.095525 4858 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06" path="/var/lib/kubelet/pods/1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06/volumes" Jan 27 20:32:51 crc kubenswrapper[4858]: I0127 20:32:51.173387 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e9e36b1-d81b-4be3-a0d7-ee413bdece24","Type":"ContainerStarted","Data":"ba9ba9ddbfb0c3aa3f74e4ace521a1eba9eb00935a00c96a859abf3e88478aaf"} Jan 27 20:32:51 crc kubenswrapper[4858]: I0127 20:32:51.173771 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 20:32:51 crc kubenswrapper[4858]: I0127 20:32:51.212112 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.652998619 podStartE2EDuration="5.2120892s" podCreationTimestamp="2026-01-27 20:32:46 +0000 UTC" firstStartedPulling="2026-01-27 20:32:47.455866595 +0000 UTC m=+1512.163682301" lastFinishedPulling="2026-01-27 20:32:50.014957176 +0000 UTC m=+1514.722772882" observedRunningTime="2026-01-27 20:32:51.19818848 +0000 UTC m=+1515.906004196" watchObservedRunningTime="2026-01-27 20:32:51.2120892 +0000 UTC m=+1515.919904906" Jan 27 20:32:52 crc kubenswrapper[4858]: I0127 20:32:52.560017 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 20:32:52 crc kubenswrapper[4858]: I0127 20:32:52.560322 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 20:32:53 crc kubenswrapper[4858]: I0127 20:32:53.226973 4858 generic.go:334] "Generic (PLEG): container finished" podID="ee8824c1-03d3-4583-808c-c308867369e5" containerID="e733cf2c2aa94c74cb74a00c484e07cd64a85c7443ea5a453ef88b73df4437e1" exitCode=0 Jan 27 20:32:53 crc kubenswrapper[4858]: I0127 20:32:53.227076 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q27s5" event={"ID":"ee8824c1-03d3-4583-808c-c308867369e5","Type":"ContainerDied","Data":"e733cf2c2aa94c74cb74a00c484e07cd64a85c7443ea5a453ef88b73df4437e1"} Jan 27 20:32:53 crc kubenswrapper[4858]: I0127 20:32:53.575836 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.222:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 20:32:53 crc kubenswrapper[4858]: I0127 20:32:53.575861 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.222:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 20:32:54 crc kubenswrapper[4858]: I0127 20:32:54.689985 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q27s5" Jan 27 20:32:54 crc kubenswrapper[4858]: I0127 20:32:54.755566 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-scripts\") pod \"ee8824c1-03d3-4583-808c-c308867369e5\" (UID: \"ee8824c1-03d3-4583-808c-c308867369e5\") " Jan 27 20:32:54 crc kubenswrapper[4858]: I0127 20:32:54.755755 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5tjp\" (UniqueName: \"kubernetes.io/projected/ee8824c1-03d3-4583-808c-c308867369e5-kube-api-access-f5tjp\") pod \"ee8824c1-03d3-4583-808c-c308867369e5\" (UID: \"ee8824c1-03d3-4583-808c-c308867369e5\") " Jan 27 20:32:54 crc kubenswrapper[4858]: I0127 20:32:54.755868 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-config-data\") pod \"ee8824c1-03d3-4583-808c-c308867369e5\" (UID: \"ee8824c1-03d3-4583-808c-c308867369e5\") " Jan 27 20:32:54 crc kubenswrapper[4858]: I0127 20:32:54.755989 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-combined-ca-bundle\") pod \"ee8824c1-03d3-4583-808c-c308867369e5\" (UID: \"ee8824c1-03d3-4583-808c-c308867369e5\") " Jan 27 20:32:54 crc kubenswrapper[4858]: I0127 20:32:54.765145 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-scripts" (OuterVolumeSpecName: "scripts") pod "ee8824c1-03d3-4583-808c-c308867369e5" (UID: "ee8824c1-03d3-4583-808c-c308867369e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:54 crc kubenswrapper[4858]: I0127 20:32:54.777881 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee8824c1-03d3-4583-808c-c308867369e5-kube-api-access-f5tjp" (OuterVolumeSpecName: "kube-api-access-f5tjp") pod "ee8824c1-03d3-4583-808c-c308867369e5" (UID: "ee8824c1-03d3-4583-808c-c308867369e5"). InnerVolumeSpecName "kube-api-access-f5tjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:32:54 crc kubenswrapper[4858]: I0127 20:32:54.808511 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-config-data" (OuterVolumeSpecName: "config-data") pod "ee8824c1-03d3-4583-808c-c308867369e5" (UID: "ee8824c1-03d3-4583-808c-c308867369e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:54 crc kubenswrapper[4858]: I0127 20:32:54.820273 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee8824c1-03d3-4583-808c-c308867369e5" (UID: "ee8824c1-03d3-4583-808c-c308867369e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:54 crc kubenswrapper[4858]: I0127 20:32:54.859753 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:54 crc kubenswrapper[4858]: I0127 20:32:54.859782 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:54 crc kubenswrapper[4858]: I0127 20:32:54.859793 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5tjp\" (UniqueName: \"kubernetes.io/projected/ee8824c1-03d3-4583-808c-c308867369e5-kube-api-access-f5tjp\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:54 crc kubenswrapper[4858]: I0127 20:32:54.859803 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee8824c1-03d3-4583-808c-c308867369e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:55 crc kubenswrapper[4858]: I0127 20:32:55.251359 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q27s5" event={"ID":"ee8824c1-03d3-4583-808c-c308867369e5","Type":"ContainerDied","Data":"67dc15c6d763016009eca05d9cafcc6c16efb0478f498caaa9fd94f77a36a4ff"} Jan 27 20:32:55 crc kubenswrapper[4858]: I0127 20:32:55.251405 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q27s5" Jan 27 20:32:55 crc kubenswrapper[4858]: I0127 20:32:55.251413 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67dc15c6d763016009eca05d9cafcc6c16efb0478f498caaa9fd94f77a36a4ff" Jan 27 20:32:55 crc kubenswrapper[4858]: I0127 20:32:55.467610 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:32:55 crc kubenswrapper[4858]: I0127 20:32:55.468120 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" containerName="nova-api-log" containerID="cri-o://558cda788518ca28bf6cb21bc65a2c025ca36db44926c6cc5189f9b967acac01" gracePeriod=30 Jan 27 20:32:55 crc kubenswrapper[4858]: I0127 20:32:55.468230 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" containerName="nova-api-api" containerID="cri-o://68f25368b47f17cadac0b000cec6ad63571e1fed02744dcc986cf69b6584a226" gracePeriod=30 Jan 27 20:32:55 crc kubenswrapper[4858]: I0127 20:32:55.488815 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:32:55 crc kubenswrapper[4858]: I0127 20:32:55.489101 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1f8407b6-e0c9-4cb2-b150-e505dc4f63b6" containerName="nova-scheduler-scheduler" containerID="cri-o://bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef" gracePeriod=30 Jan 27 20:32:55 crc kubenswrapper[4858]: I0127 20:32:55.550224 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:32:55 crc kubenswrapper[4858]: I0127 20:32:55.551292 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e1e9ddf6-0f93-4a84-a740-26192235747a" 
containerName="nova-metadata-log" containerID="cri-o://bb39d993f59af6de844284a6647d6465e9e1d088423f0911fdd349f27a7c5aa1" gracePeriod=30 Jan 27 20:32:55 crc kubenswrapper[4858]: I0127 20:32:55.551446 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e1e9ddf6-0f93-4a84-a740-26192235747a" containerName="nova-metadata-metadata" containerID="cri-o://d5387d083b655d0e6133eea9c1f7b8176abfefbb671428f5aa2091a6af19fb08" gracePeriod=30 Jan 27 20:32:56 crc kubenswrapper[4858]: I0127 20:32:56.264654 4858 generic.go:334] "Generic (PLEG): container finished" podID="e1e9ddf6-0f93-4a84-a740-26192235747a" containerID="bb39d993f59af6de844284a6647d6465e9e1d088423f0911fdd349f27a7c5aa1" exitCode=143 Jan 27 20:32:56 crc kubenswrapper[4858]: I0127 20:32:56.264737 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1e9ddf6-0f93-4a84-a740-26192235747a","Type":"ContainerDied","Data":"bb39d993f59af6de844284a6647d6465e9e1d088423f0911fdd349f27a7c5aa1"} Jan 27 20:32:56 crc kubenswrapper[4858]: I0127 20:32:56.267677 4858 generic.go:334] "Generic (PLEG): container finished" podID="0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" containerID="558cda788518ca28bf6cb21bc65a2c025ca36db44926c6cc5189f9b967acac01" exitCode=143 Jan 27 20:32:56 crc kubenswrapper[4858]: I0127 20:32:56.267734 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1","Type":"ContainerDied","Data":"558cda788518ca28bf6cb21bc65a2c025ca36db44926c6cc5189f9b967acac01"} Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.240216 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.300695 4858 generic.go:334] "Generic (PLEG): container finished" podID="e1e9ddf6-0f93-4a84-a740-26192235747a" containerID="d5387d083b655d0e6133eea9c1f7b8176abfefbb671428f5aa2091a6af19fb08" exitCode=0 Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.300752 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1e9ddf6-0f93-4a84-a740-26192235747a","Type":"ContainerDied","Data":"d5387d083b655d0e6133eea9c1f7b8176abfefbb671428f5aa2091a6af19fb08"} Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.300795 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e1e9ddf6-0f93-4a84-a740-26192235747a","Type":"ContainerDied","Data":"1d6ff1344a5603c1531fc2e6a66ff27337f606fb0cf735d85106f705d3085987"} Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.300821 4858 scope.go:117] "RemoveContainer" containerID="d5387d083b655d0e6133eea9c1f7b8176abfefbb671428f5aa2091a6af19fb08" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.301209 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.317470 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5qpf\" (UniqueName: \"kubernetes.io/projected/e1e9ddf6-0f93-4a84-a740-26192235747a-kube-api-access-w5qpf\") pod \"e1e9ddf6-0f93-4a84-a740-26192235747a\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.317608 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-combined-ca-bundle\") pod \"e1e9ddf6-0f93-4a84-a740-26192235747a\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.317821 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-nova-metadata-tls-certs\") pod \"e1e9ddf6-0f93-4a84-a740-26192235747a\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.317891 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-config-data\") pod \"e1e9ddf6-0f93-4a84-a740-26192235747a\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.318032 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1e9ddf6-0f93-4a84-a740-26192235747a-logs\") pod \"e1e9ddf6-0f93-4a84-a740-26192235747a\" (UID: \"e1e9ddf6-0f93-4a84-a740-26192235747a\") " Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.318792 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1e9ddf6-0f93-4a84-a740-26192235747a-logs" (OuterVolumeSpecName: "logs") pod "e1e9ddf6-0f93-4a84-a740-26192235747a" (UID: "e1e9ddf6-0f93-4a84-a740-26192235747a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.331877 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e9ddf6-0f93-4a84-a740-26192235747a-kube-api-access-w5qpf" (OuterVolumeSpecName: "kube-api-access-w5qpf") pod "e1e9ddf6-0f93-4a84-a740-26192235747a" (UID: "e1e9ddf6-0f93-4a84-a740-26192235747a"). InnerVolumeSpecName "kube-api-access-w5qpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.345991 4858 scope.go:117] "RemoveContainer" containerID="bb39d993f59af6de844284a6647d6465e9e1d088423f0911fdd349f27a7c5aa1" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.376014 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1e9ddf6-0f93-4a84-a740-26192235747a" (UID: "e1e9ddf6-0f93-4a84-a740-26192235747a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.383251 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-config-data" (OuterVolumeSpecName: "config-data") pod "e1e9ddf6-0f93-4a84-a740-26192235747a" (UID: "e1e9ddf6-0f93-4a84-a740-26192235747a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.424304 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5qpf\" (UniqueName: \"kubernetes.io/projected/e1e9ddf6-0f93-4a84-a740-26192235747a-kube-api-access-w5qpf\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.424365 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.424381 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.424390 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1e9ddf6-0f93-4a84-a740-26192235747a-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.445737 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e1e9ddf6-0f93-4a84-a740-26192235747a" (UID: "e1e9ddf6-0f93-4a84-a740-26192235747a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.466921 4858 scope.go:117] "RemoveContainer" containerID="d5387d083b655d0e6133eea9c1f7b8176abfefbb671428f5aa2091a6af19fb08" Jan 27 20:32:57 crc kubenswrapper[4858]: E0127 20:32:57.467624 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5387d083b655d0e6133eea9c1f7b8176abfefbb671428f5aa2091a6af19fb08\": container with ID starting with d5387d083b655d0e6133eea9c1f7b8176abfefbb671428f5aa2091a6af19fb08 not found: ID does not exist" containerID="d5387d083b655d0e6133eea9c1f7b8176abfefbb671428f5aa2091a6af19fb08" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.467660 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5387d083b655d0e6133eea9c1f7b8176abfefbb671428f5aa2091a6af19fb08"} err="failed to get container status \"d5387d083b655d0e6133eea9c1f7b8176abfefbb671428f5aa2091a6af19fb08\": rpc error: code = NotFound desc = could not find container \"d5387d083b655d0e6133eea9c1f7b8176abfefbb671428f5aa2091a6af19fb08\": container with ID starting with d5387d083b655d0e6133eea9c1f7b8176abfefbb671428f5aa2091a6af19fb08 not found: ID does not exist" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.467685 4858 scope.go:117] "RemoveContainer" containerID="bb39d993f59af6de844284a6647d6465e9e1d088423f0911fdd349f27a7c5aa1" Jan 27 20:32:57 crc kubenswrapper[4858]: E0127 20:32:57.467996 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb39d993f59af6de844284a6647d6465e9e1d088423f0911fdd349f27a7c5aa1\": container with ID starting with bb39d993f59af6de844284a6647d6465e9e1d088423f0911fdd349f27a7c5aa1 not found: ID does not exist" containerID="bb39d993f59af6de844284a6647d6465e9e1d088423f0911fdd349f27a7c5aa1" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.468023 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb39d993f59af6de844284a6647d6465e9e1d088423f0911fdd349f27a7c5aa1"} err="failed to get container status \"bb39d993f59af6de844284a6647d6465e9e1d088423f0911fdd349f27a7c5aa1\": rpc error: code = NotFound desc = could not find container \"bb39d993f59af6de844284a6647d6465e9e1d088423f0911fdd349f27a7c5aa1\": container with ID starting with bb39d993f59af6de844284a6647d6465e9e1d088423f0911fdd349f27a7c5aa1 not found: ID does not exist" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.527134 4858 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1e9ddf6-0f93-4a84-a740-26192235747a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.641847 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.654687 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.673324 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:32:57 crc kubenswrapper[4858]: E0127 20:32:57.673921 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e9ddf6-0f93-4a84-a740-26192235747a" containerName="nova-metadata-metadata" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.673948 4858 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e9ddf6-0f93-4a84-a740-26192235747a" containerName="nova-metadata-metadata" Jan 27 20:32:57 crc kubenswrapper[4858]: E0127 20:32:57.673965 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06" containerName="init" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.673975 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06" containerName="init" Jan 27 20:32:57 crc kubenswrapper[4858]: E0127 20:32:57.674004 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e9ddf6-0f93-4a84-a740-26192235747a" containerName="nova-metadata-log" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.674013 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e9ddf6-0f93-4a84-a740-26192235747a" containerName="nova-metadata-log" Jan 27 20:32:57 crc kubenswrapper[4858]: E0127 20:32:57.674056 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee8824c1-03d3-4583-808c-c308867369e5" containerName="nova-manage" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.674065 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee8824c1-03d3-4583-808c-c308867369e5" containerName="nova-manage" Jan 27 20:32:57 crc kubenswrapper[4858]: E0127 20:32:57.674081 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06" containerName="dnsmasq-dns" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.674089 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06" containerName="dnsmasq-dns" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.674337 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee8824c1-03d3-4583-808c-c308867369e5" containerName="nova-manage" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.674361 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e9ddf6-0f93-4a84-a740-26192235747a" containerName="nova-metadata-log" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.674373 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e9ddf6-0f93-4a84-a740-26192235747a" containerName="nova-metadata-metadata" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.674391 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc8fbce-1e3c-4c60-a46b-1ca2220c5f06" containerName="dnsmasq-dns" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.675954 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.678759 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.678990 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.685131 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.731967 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24\") " pod="openstack/nova-metadata-0" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.732068 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24-logs\") pod \"nova-metadata-0\" (UID: \"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24\") " pod="openstack/nova-metadata-0" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.732409 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24\") " pod="openstack/nova-metadata-0" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.732997 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npzzs\" (UniqueName: \"kubernetes.io/projected/2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24-kube-api-access-npzzs\") pod \"nova-metadata-0\" (UID: \"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24\") " pod="openstack/nova-metadata-0" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.733174 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24-config-data\") pod \"nova-metadata-0\" (UID: \"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24\") " pod="openstack/nova-metadata-0" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.835541 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24-config-data\") pod \"nova-metadata-0\" (UID: \"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24\") " pod="openstack/nova-metadata-0" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.835665 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24\") " pod="openstack/nova-metadata-0" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.835710 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24-logs\") pod \"nova-metadata-0\" (UID: \"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24\") " pod="openstack/nova-metadata-0" Jan 27 20:32:57 crc 
kubenswrapper[4858]: I0127 20:32:57.835781 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24\") " pod="openstack/nova-metadata-0" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.835846 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npzzs\" (UniqueName: \"kubernetes.io/projected/2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24-kube-api-access-npzzs\") pod \"nova-metadata-0\" (UID: \"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24\") " pod="openstack/nova-metadata-0" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.836329 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24-logs\") pod \"nova-metadata-0\" (UID: \"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24\") " pod="openstack/nova-metadata-0" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.839815 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24-config-data\") pod \"nova-metadata-0\" (UID: \"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24\") " pod="openstack/nova-metadata-0" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.840110 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24\") " pod="openstack/nova-metadata-0" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.840628 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24\") " pod="openstack/nova-metadata-0" Jan 27 20:32:57 crc kubenswrapper[4858]: I0127 20:32:57.858233 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npzzs\" (UniqueName: \"kubernetes.io/projected/2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24-kube-api-access-npzzs\") pod \"nova-metadata-0\" (UID: \"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24\") " pod="openstack/nova-metadata-0" Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.007164 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.092364 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1e9ddf6-0f93-4a84-a740-26192235747a" path="/var/lib/kubelet/pods/e1e9ddf6-0f93-4a84-a740-26192235747a/volumes" Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.529781 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 20:32:58 crc kubenswrapper[4858]: W0127 20:32:58.540493 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a9b68aa_8d04_462e_9d8a_0c0bfa73dc24.slice/crio-9380198465a69e8ad6a6ba04187ba09dbad66051e75bab8ae62a236311c6d3a2 WatchSource:0}: Error finding container 9380198465a69e8ad6a6ba04187ba09dbad66051e75bab8ae62a236311c6d3a2: Status 404 returned error can't find the container with id 9380198465a69e8ad6a6ba04187ba09dbad66051e75bab8ae62a236311c6d3a2 Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.675772 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.757319 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-combined-ca-bundle\") pod \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.757769 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-config-data\") pod \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.757809 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-logs\") pod \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.757840 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-public-tls-certs\") pod \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.757946 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-internal-tls-certs\") pod \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.758068 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqglf\" (UniqueName: \"kubernetes.io/projected/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-kube-api-access-wqglf\") pod \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\" (UID: \"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1\") " Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.763149 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-logs" (OuterVolumeSpecName: "logs") pod 
"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" (UID: "0814fb48-7f7d-4ebd-a3f1-f7017387a3e1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.766771 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-kube-api-access-wqglf" (OuterVolumeSpecName: "kube-api-access-wqglf") pod "0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" (UID: "0814fb48-7f7d-4ebd-a3f1-f7017387a3e1"). InnerVolumeSpecName "kube-api-access-wqglf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.825726 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-config-data" (OuterVolumeSpecName: "config-data") pod "0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" (UID: "0814fb48-7f7d-4ebd-a3f1-f7017387a3e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.842595 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" (UID: "0814fb48-7f7d-4ebd-a3f1-f7017387a3e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.849811 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" (UID: "0814fb48-7f7d-4ebd-a3f1-f7017387a3e1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.861228 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.861361 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.861433 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-logs\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.861502 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.861635 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqglf\" (UniqueName: \"kubernetes.io/projected/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-kube-api-access-wqglf\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.861907 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" (UID: "0814fb48-7f7d-4ebd-a3f1-f7017387a3e1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:58 crc kubenswrapper[4858]: E0127 20:32:58.877573 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef is running failed: container process not found" containerID="bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 20:32:58 crc kubenswrapper[4858]: E0127 20:32:58.878179 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef is running failed: container process not found" containerID="bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 20:32:58 crc kubenswrapper[4858]: E0127 20:32:58.878501 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef is running failed: container process not found" containerID="bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 20:32:58 crc kubenswrapper[4858]: E0127 20:32:58.878636 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef is running failed: container process not found" probeType="Readiness" 
pod="openstack/nova-scheduler-0" podUID="1f8407b6-e0c9-4cb2-b150-e505dc4f63b6" containerName="nova-scheduler-scheduler" Jan 27 20:32:58 crc kubenswrapper[4858]: I0127 20:32:58.963700 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.022963 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.172497 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-config-data\") pod \"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6\" (UID: \"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6\") " Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.172594 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-combined-ca-bundle\") pod \"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6\" (UID: \"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6\") " Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.172818 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wql4h\" (UniqueName: \"kubernetes.io/projected/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-kube-api-access-wql4h\") pod \"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6\" (UID: \"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6\") " Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.194851 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-kube-api-access-wql4h" (OuterVolumeSpecName: "kube-api-access-wql4h") pod "1f8407b6-e0c9-4cb2-b150-e505dc4f63b6" (UID: "1f8407b6-e0c9-4cb2-b150-e505dc4f63b6"). InnerVolumeSpecName "kube-api-access-wql4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.248776 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-config-data" (OuterVolumeSpecName: "config-data") pod "1f8407b6-e0c9-4cb2-b150-e505dc4f63b6" (UID: "1f8407b6-e0c9-4cb2-b150-e505dc4f63b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.278978 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wql4h\" (UniqueName: \"kubernetes.io/projected/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-kube-api-access-wql4h\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.279224 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.282728 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f8407b6-e0c9-4cb2-b150-e505dc4f63b6" (UID: "1f8407b6-e0c9-4cb2-b150-e505dc4f63b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.328904 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.329187 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.329351 4858 generic.go:334] "Generic (PLEG): container finished" podID="1f8407b6-e0c9-4cb2-b150-e505dc4f63b6" containerID="bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef" exitCode=0 Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.329441 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6","Type":"ContainerDied","Data":"bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef"} Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.329564 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f8407b6-e0c9-4cb2-b150-e505dc4f63b6","Type":"ContainerDied","Data":"ea9e347a0bd50c590151a28197de70f89d5b97f612f696b57357dbd8f48a8faf"} Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.329674 4858 scope.go:117] "RemoveContainer" containerID="bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.329698 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.335978 4858 generic.go:334] "Generic (PLEG): container finished" podID="0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" containerID="68f25368b47f17cadac0b000cec6ad63571e1fed02744dcc986cf69b6584a226" exitCode=0 Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.336161 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.336265 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1","Type":"ContainerDied","Data":"68f25368b47f17cadac0b000cec6ad63571e1fed02744dcc986cf69b6584a226"} Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.336360 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0814fb48-7f7d-4ebd-a3f1-f7017387a3e1","Type":"ContainerDied","Data":"a3a84e2367c9623532bdc56797b38268fd56ea826ad30096c224eb57f3ac89b3"} Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.341420 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24","Type":"ContainerStarted","Data":"e10853852a803b55c8f204dce440edb9013069f76adf84371fe39974afb0bc8e"} Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.341471 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24","Type":"ContainerStarted","Data":"9380198465a69e8ad6a6ba04187ba09dbad66051e75bab8ae62a236311c6d3a2"} Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.371798 4858 scope.go:117] "RemoveContainer" containerID="bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef" Jan 27 20:32:59 crc kubenswrapper[4858]: E0127 20:32:59.376746 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef\": container with ID starting with bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef not found: ID does not exist" containerID="bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.376811 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef"} err="failed to get container status \"bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef\": rpc error: code = NotFound desc = could not find container \"bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef\": container with ID starting with bb12906cebbd9121997c4911edae1b551306a8ccf34aa98128fc0d985b920eef not found: ID does not exist" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.376838 4858 scope.go:117] "RemoveContainer" containerID="68f25368b47f17cadac0b000cec6ad63571e1fed02744dcc986cf69b6584a226" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.383727 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.395834 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.407734 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.419755 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.428730 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:32:59 crc 
kubenswrapper[4858]: I0127 20:32:59.435032 4858 scope.go:117] "RemoveContainer" containerID="558cda788518ca28bf6cb21bc65a2c025ca36db44926c6cc5189f9b967acac01" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.444796 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 20:32:59 crc kubenswrapper[4858]: E0127 20:32:59.445338 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" containerName="nova-api-log" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.445361 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" containerName="nova-api-log" Jan 27 20:32:59 crc kubenswrapper[4858]: E0127 20:32:59.445394 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8407b6-e0c9-4cb2-b150-e505dc4f63b6" containerName="nova-scheduler-scheduler" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.445406 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8407b6-e0c9-4cb2-b150-e505dc4f63b6" containerName="nova-scheduler-scheduler" Jan 27 20:32:59 crc kubenswrapper[4858]: E0127 20:32:59.445418 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" containerName="nova-api-api" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.445426 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" containerName="nova-api-api" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.445711 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" containerName="nova-api-log" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.445735 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" containerName="nova-api-api" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.445756 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8407b6-e0c9-4cb2-b150-e505dc4f63b6" containerName="nova-scheduler-scheduler" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.447210 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.452504 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.453845 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.454165 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.465029 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.476835 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.478888 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.482213 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.484271 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.486142 4858 scope.go:117] "RemoveContainer" containerID="68f25368b47f17cadac0b000cec6ad63571e1fed02744dcc986cf69b6584a226" Jan 27 20:32:59 crc kubenswrapper[4858]: E0127 20:32:59.487062 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68f25368b47f17cadac0b000cec6ad63571e1fed02744dcc986cf69b6584a226\": container with ID starting with 68f25368b47f17cadac0b000cec6ad63571e1fed02744dcc986cf69b6584a226 not found: ID does not exist" containerID="68f25368b47f17cadac0b000cec6ad63571e1fed02744dcc986cf69b6584a226" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.487090 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68f25368b47f17cadac0b000cec6ad63571e1fed02744dcc986cf69b6584a226"} err="failed to get container status \"68f25368b47f17cadac0b000cec6ad63571e1fed02744dcc986cf69b6584a226\": rpc error: code = NotFound desc = could not find container \"68f25368b47f17cadac0b000cec6ad63571e1fed02744dcc986cf69b6584a226\": container with ID starting with 68f25368b47f17cadac0b000cec6ad63571e1fed02744dcc986cf69b6584a226 not found: ID does not exist" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.487113 4858 scope.go:117] "RemoveContainer" containerID="558cda788518ca28bf6cb21bc65a2c025ca36db44926c6cc5189f9b967acac01" Jan 27 20:32:59 crc kubenswrapper[4858]: E0127 20:32:59.488314 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"558cda788518ca28bf6cb21bc65a2c025ca36db44926c6cc5189f9b967acac01\": container with ID starting with 558cda788518ca28bf6cb21bc65a2c025ca36db44926c6cc5189f9b967acac01 not found: ID does not exist" containerID="558cda788518ca28bf6cb21bc65a2c025ca36db44926c6cc5189f9b967acac01" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.488350 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"558cda788518ca28bf6cb21bc65a2c025ca36db44926c6cc5189f9b967acac01"} err="failed to get container status \"558cda788518ca28bf6cb21bc65a2c025ca36db44926c6cc5189f9b967acac01\": rpc error: code = NotFound desc = could not find container \"558cda788518ca28bf6cb21bc65a2c025ca36db44926c6cc5189f9b967acac01\": container with ID starting with 558cda788518ca28bf6cb21bc65a2c025ca36db44926c6cc5189f9b967acac01 not found: ID does not exist" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.587113 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1620a5-c040-450a-a149-e7bf421b80d9-config-data\") pod \"nova-scheduler-0\" (UID: \"6f1620a5-c040-450a-a149-e7bf421b80d9\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.587354 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b104f9b-a37f-44bc-875f-03ce0d396c57-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.587431 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b104f9b-a37f-44bc-875f-03ce0d396c57-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.587603 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b104f9b-a37f-44bc-875f-03ce0d396c57-config-data\") pod \"nova-api-0\" (UID: \"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.587636 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc4mr\" (UniqueName: \"kubernetes.io/projected/6f1620a5-c040-450a-a149-e7bf421b80d9-kube-api-access-nc4mr\") pod \"nova-scheduler-0\" (UID: \"6f1620a5-c040-450a-a149-e7bf421b80d9\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.587792 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1620a5-c040-450a-a149-e7bf421b80d9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f1620a5-c040-450a-a149-e7bf421b80d9\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.587873 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b104f9b-a37f-44bc-875f-03ce0d396c57-logs\") pod \"nova-api-0\" (UID: \"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.587917 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmp9d\" (UniqueName: \"kubernetes.io/projected/4b104f9b-a37f-44bc-875f-03ce0d396c57-kube-api-access-wmp9d\") pod \"nova-api-0\" (UID: \"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.588044 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b104f9b-a37f-44bc-875f-03ce0d396c57-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.690434 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b104f9b-a37f-44bc-875f-03ce0d396c57-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.690489 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b104f9b-a37f-44bc-875f-03ce0d396c57-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.690543 4858 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b104f9b-a37f-44bc-875f-03ce0d396c57-config-data\") pod \"nova-api-0\" (UID: \"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.690585 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc4mr\" (UniqueName: \"kubernetes.io/projected/6f1620a5-c040-450a-a149-e7bf421b80d9-kube-api-access-nc4mr\") pod \"nova-scheduler-0\" (UID: \"6f1620a5-c040-450a-a149-e7bf421b80d9\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.690635 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1620a5-c040-450a-a149-e7bf421b80d9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f1620a5-c040-450a-a149-e7bf421b80d9\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.690665 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b104f9b-a37f-44bc-875f-03ce0d396c57-logs\") pod \"nova-api-0\" (UID: \"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.690687 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmp9d\" (UniqueName: \"kubernetes.io/projected/4b104f9b-a37f-44bc-875f-03ce0d396c57-kube-api-access-wmp9d\") pod \"nova-api-0\" (UID: \"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.690738 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b104f9b-a37f-44bc-875f-03ce0d396c57-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.690799 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1620a5-c040-450a-a149-e7bf421b80d9-config-data\") pod \"nova-scheduler-0\" (UID: \"6f1620a5-c040-450a-a149-e7bf421b80d9\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.691265 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b104f9b-a37f-44bc-875f-03ce0d396c57-logs\") pod \"nova-api-0\" (UID: \"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.697148 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b104f9b-a37f-44bc-875f-03ce0d396c57-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.697396 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b104f9b-a37f-44bc-875f-03ce0d396c57-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.698025 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/6f1620a5-c040-450a-a149-e7bf421b80d9-config-data\") pod \"nova-scheduler-0\" (UID: \"6f1620a5-c040-450a-a149-e7bf421b80d9\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.698600 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b104f9b-a37f-44bc-875f-03ce0d396c57-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.699044 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b104f9b-a37f-44bc-875f-03ce0d396c57-config-data\") pod \"nova-api-0\" (UID: \"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.699989 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1620a5-c040-450a-a149-e7bf421b80d9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f1620a5-c040-450a-a149-e7bf421b80d9\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.710249 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc4mr\" (UniqueName: \"kubernetes.io/projected/6f1620a5-c040-450a-a149-e7bf421b80d9-kube-api-access-nc4mr\") pod \"nova-scheduler-0\" (UID: \"6f1620a5-c040-450a-a149-e7bf421b80d9\") " pod="openstack/nova-scheduler-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.711895 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmp9d\" (UniqueName: \"kubernetes.io/projected/4b104f9b-a37f-44bc-875f-03ce0d396c57-kube-api-access-wmp9d\") pod \"nova-api-0\" (UID: \"4b104f9b-a37f-44bc-875f-03ce0d396c57\") " pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.778001 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 20:32:59 crc kubenswrapper[4858]: I0127 20:32:59.804660 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 20:33:00 crc kubenswrapper[4858]: I0127 20:33:00.083329 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0814fb48-7f7d-4ebd-a3f1-f7017387a3e1" path="/var/lib/kubelet/pods/0814fb48-7f7d-4ebd-a3f1-f7017387a3e1/volumes" Jan 27 20:33:00 crc kubenswrapper[4858]: I0127 20:33:00.084955 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f8407b6-e0c9-4cb2-b150-e505dc4f63b6" path="/var/lib/kubelet/pods/1f8407b6-e0c9-4cb2-b150-e505dc4f63b6/volumes" Jan 27 20:33:00 crc kubenswrapper[4858]: I0127 20:33:00.347462 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 20:33:00 crc kubenswrapper[4858]: I0127 20:33:00.358032 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b104f9b-a37f-44bc-875f-03ce0d396c57","Type":"ContainerStarted","Data":"2b59434229b5f250e2ddf21ceb5287e23c3b3dd6d0cd7c2e01bf5627480eea77"} Jan 27 20:33:00 crc kubenswrapper[4858]: I0127 20:33:00.363582 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24","Type":"ContainerStarted","Data":"d6bfedae60200f6621b2f0c4afdbe89278a90768e0cbff204b74f6b1c9cf49cb"} Jan 27 20:33:00 crc kubenswrapper[4858]: I0127 20:33:00.399034 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.399014796 podStartE2EDuration="3.399014796s" podCreationTimestamp="2026-01-27 20:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:33:00.386810575 +0000 UTC m=+1525.094626291" watchObservedRunningTime="2026-01-27 20:33:00.399014796 +0000 UTC m=+1525.106830502" Jan 27 20:33:00 crc kubenswrapper[4858]: I0127 20:33:00.452518 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 20:33:01 crc kubenswrapper[4858]: I0127 20:33:01.380718 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b104f9b-a37f-44bc-875f-03ce0d396c57","Type":"ContainerStarted","Data":"d5e4b62c4ffb9f6aaa209d7c22e09461c9908b6cdd19e85c302035e062362c13"} Jan 27 20:33:01 crc kubenswrapper[4858]: I0127 20:33:01.381177 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b104f9b-a37f-44bc-875f-03ce0d396c57","Type":"ContainerStarted","Data":"407a2b45d6c7557f960805fb75d4b47bb9ce48a15658592f4f227f0522efcf2e"} Jan 27 20:33:01 crc kubenswrapper[4858]: I0127 20:33:01.384296 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f1620a5-c040-450a-a149-e7bf421b80d9","Type":"ContainerStarted","Data":"16b9441ff5c15b86cb748389df9b8dd5e0ead4b2bd24bbb4793959454e294764"} Jan 27 20:33:01 crc kubenswrapper[4858]: I0127 20:33:01.384331 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f1620a5-c040-450a-a149-e7bf421b80d9","Type":"ContainerStarted","Data":"cd2faf34041c664bfa0ea45348a6bfabf0eefab892ca8d7c70fd242e5382171d"} Jan 27 20:33:01 crc kubenswrapper[4858]: I0127 20:33:01.414883 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.414853721 podStartE2EDuration="2.414853721s" podCreationTimestamp="2026-01-27 20:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:33:01.407794078 +0000 UTC m=+1526.115609784" watchObservedRunningTime="2026-01-27 20:33:01.414853721 +0000 UTC m=+1526.122669427" Jan 27 20:33:01 crc kubenswrapper[4858]: I0127 20:33:01.426505 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.426485126 podStartE2EDuration="2.426485126s" podCreationTimestamp="2026-01-27 20:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:33:01.423262103 +0000 UTC m=+1526.131077809" watchObservedRunningTime="2026-01-27 20:33:01.426485126 +0000 UTC m=+1526.134300832" Jan 27 20:33:03 crc kubenswrapper[4858]: I0127 20:33:03.007417 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 20:33:03 crc kubenswrapper[4858]: I0127 20:33:03.007850 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 20:33:04 crc kubenswrapper[4858]: I0127 20:33:04.805214 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 20:33:08 crc kubenswrapper[4858]: I0127 20:33:08.007716 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 20:33:08 crc kubenswrapper[4858]: I0127 20:33:08.009851 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 20:33:09 crc kubenswrapper[4858]: I0127 20:33:09.024873 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 20:33:09 crc kubenswrapper[4858]: I0127 20:33:09.024912 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 20:33:09 crc kubenswrapper[4858]: I0127 20:33:09.779339 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 20:33:09 crc kubenswrapper[4858]: I0127 20:33:09.779871 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 20:33:09 crc kubenswrapper[4858]: I0127 20:33:09.805685 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 20:33:09 crc kubenswrapper[4858]: I0127 20:33:09.852880 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 20:33:10 crc kubenswrapper[4858]: I0127 20:33:10.560171 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 20:33:10 crc kubenswrapper[4858]: I0127 20:33:10.791784 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b104f9b-a37f-44bc-875f-03ce0d396c57" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.226:8774/\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Jan 27 20:33:10 crc kubenswrapper[4858]: I0127 20:33:10.791863 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b104f9b-a37f-44bc-875f-03ce0d396c57" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.226:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 20:33:16 crc kubenswrapper[4858]: I0127 20:33:16.731084 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 20:33:18 crc kubenswrapper[4858]: I0127 20:33:18.014297 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 20:33:18 crc kubenswrapper[4858]: I0127 20:33:18.015057 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 20:33:18 crc kubenswrapper[4858]: I0127 20:33:18.030895 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 20:33:18 crc kubenswrapper[4858]: I0127 20:33:18.627305 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 20:33:19 crc kubenswrapper[4858]: I0127 20:33:19.787862 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 20:33:19 crc kubenswrapper[4858]: I0127 20:33:19.788682 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 20:33:19 crc kubenswrapper[4858]: I0127 20:33:19.789050 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 20:33:19 crc kubenswrapper[4858]: I0127 20:33:19.804108 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 20:33:20 crc kubenswrapper[4858]: I0127 20:33:20.639510 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 20:33:20 crc kubenswrapper[4858]: I0127 20:33:20.656167 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 20:33:29 crc kubenswrapper[4858]: I0127 20:33:29.329180 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:33:29 crc kubenswrapper[4858]: I0127 20:33:29.329946 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:33:29 crc kubenswrapper[4858]: I0127 20:33:29.330002 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:33:29 crc kubenswrapper[4858]: I0127 20:33:29.331088 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Jan 27 20:33:29 crc kubenswrapper[4858]: I0127 20:33:29.331148 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" gracePeriod=600 Jan 27 20:33:29 crc kubenswrapper[4858]: E0127 20:33:29.471969 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:33:29 crc kubenswrapper[4858]: I0127 20:33:29.733571 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" exitCode=0 Jan 27 20:33:29 crc kubenswrapper[4858]: I0127 20:33:29.733649 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8"} Jan 27 20:33:29 crc kubenswrapper[4858]: I0127 20:33:29.733746 4858 scope.go:117] "RemoveContainer" containerID="96d22823cb85c08d62e23a9b28d554ba658642fe85f23bff7568ba66ed62f3ed" Jan 27 20:33:29 crc kubenswrapper[4858]: I0127 20:33:29.735293 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:33:29 crc kubenswrapper[4858]: E0127 20:33:29.735829 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:33:30 crc kubenswrapper[4858]: I0127 20:33:30.448137 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 20:33:31 crc kubenswrapper[4858]: I0127 20:33:31.450652 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 20:33:34 crc kubenswrapper[4858]: I0127 20:33:34.333122 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" containerName="rabbitmq" containerID="cri-o://bcb1d6cdad9834a8ca239bc0e4bd61fa15a8e10dc74226d26466495bc626a052" gracePeriod=604797 Jan 27 20:33:35 crc kubenswrapper[4858]: I0127 20:33:35.055942 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Jan 27 20:33:35 crc kubenswrapper[4858]: I0127 20:33:35.241627 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" 
podUID="ad881410-229a-4427-862b-8febd0e5ab61" containerName="rabbitmq" containerID="cri-o://bbe9d09c4a05d23f65ac7dfee7aee83648557a8f6299c8b7af77c260fc6b0d14" gracePeriod=604797 Jan 27 20:33:35 crc kubenswrapper[4858]: I0127 20:33:35.825759 4858 generic.go:334] "Generic (PLEG): container finished" podID="825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" containerID="bcb1d6cdad9834a8ca239bc0e4bd61fa15a8e10dc74226d26466495bc626a052" exitCode=0 Jan 27 20:33:35 crc kubenswrapper[4858]: I0127 20:33:35.826055 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2","Type":"ContainerDied","Data":"bcb1d6cdad9834a8ca239bc0e4bd61fa15a8e10dc74226d26466495bc626a052"} Jan 27 20:33:35 crc kubenswrapper[4858]: I0127 20:33:35.826086 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2","Type":"ContainerDied","Data":"320b2cb1f3771689c2a0130fa8366cf8099a73e035ba71f3bed62bebd36d86c4"} Jan 27 20:33:35 crc kubenswrapper[4858]: I0127 20:33:35.826098 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="320b2cb1f3771689c2a0130fa8366cf8099a73e035ba71f3bed62bebd36d86c4" Jan 27 20:33:35 crc kubenswrapper[4858]: I0127 20:33:35.930307 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.067425 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.067579 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-erlang-cookie-secret\") pod \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.067635 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-tls\") pod \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.067802 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-pod-info\") pod \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.067834 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-erlang-cookie\") pod \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.067882 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9h4j\" (UniqueName: \"kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-kube-api-access-b9h4j\") pod \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " Jan 
27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.067919 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-config-data\") pod \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.067995 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-server-conf\") pod \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.068042 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-plugins\") pod \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.068092 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-plugins-conf\") pod \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.068115 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-confd\") pod \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\" (UID: \"825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2\") " Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.070483 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" (UID: "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.082899 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" (UID: "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.083357 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" (UID: "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.108842 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" (UID: "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.108870 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" (UID: "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.109074 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" (UID: "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.117183 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-kube-api-access-b9h4j" (OuterVolumeSpecName: "kube-api-access-b9h4j") pod "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" (UID: "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2"). InnerVolumeSpecName "kube-api-access-b9h4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.158302 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-pod-info" (OuterVolumeSpecName: "pod-info") pod "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" (UID: "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.172304 4858 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.172340 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.172352 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9h4j\" (UniqueName: \"kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-kube-api-access-b9h4j\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.172363 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.172373 4858 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.172399 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.172413 4858 reconciler_common.go:293] "Volume detached for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.172427 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.175136 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-config-data" (OuterVolumeSpecName: "config-data") pod "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" (UID: "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.203375 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-server-conf" (OuterVolumeSpecName: "server-conf") pod "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" (UID: "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.231178 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.274771 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.274821 4858 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.274835 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.319517 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" (UID: "825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.377441 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.840927 4858 generic.go:334] "Generic (PLEG): container finished" podID="ad881410-229a-4427-862b-8febd0e5ab61" containerID="bbe9d09c4a05d23f65ac7dfee7aee83648557a8f6299c8b7af77c260fc6b0d14" exitCode=0 Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.841348 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.841110 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad881410-229a-4427-862b-8febd0e5ab61","Type":"ContainerDied","Data":"bbe9d09c4a05d23f65ac7dfee7aee83648557a8f6299c8b7af77c260fc6b0d14"} Jan 27 20:33:36 crc kubenswrapper[4858]: I0127 20:33:36.964513 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.075518 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.094596 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.109535 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-plugins-conf\") pod \"ad881410-229a-4427-862b-8febd0e5ab61\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.109829 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-plugins\") pod \"ad881410-229a-4427-862b-8febd0e5ab61\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.109991 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ad881410-229a-4427-862b-8febd0e5ab61\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.110146 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-server-conf\") pod \"ad881410-229a-4427-862b-8febd0e5ab61\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.110256 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad881410-229a-4427-862b-8febd0e5ab61-pod-info\") pod \"ad881410-229a-4427-862b-8febd0e5ab61\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.110466 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-tls\") pod \"ad881410-229a-4427-862b-8febd0e5ab61\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.110647 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-config-data\") pod \"ad881410-229a-4427-862b-8febd0e5ab61\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.110766 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-erlang-cookie\") 
pod \"ad881410-229a-4427-862b-8febd0e5ab61\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.110879 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad881410-229a-4427-862b-8febd0e5ab61-erlang-cookie-secret\") pod \"ad881410-229a-4427-862b-8febd0e5ab61\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.110957 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffbht\" (UniqueName: \"kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-kube-api-access-ffbht\") pod \"ad881410-229a-4427-862b-8febd0e5ab61\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.111038 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-confd\") pod \"ad881410-229a-4427-862b-8febd0e5ab61\" (UID: \"ad881410-229a-4427-862b-8febd0e5ab61\") " Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.111447 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ad881410-229a-4427-862b-8febd0e5ab61" (UID: "ad881410-229a-4427-862b-8febd0e5ab61"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.111824 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ad881410-229a-4427-862b-8febd0e5ab61" (UID: "ad881410-229a-4427-862b-8febd0e5ab61"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.112074 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.112205 4858 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.112148 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ad881410-229a-4427-862b-8febd0e5ab61" (UID: "ad881410-229a-4427-862b-8febd0e5ab61"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.118000 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ad881410-229a-4427-862b-8febd0e5ab61-pod-info" (OuterVolumeSpecName: "pod-info") pod "ad881410-229a-4427-862b-8febd0e5ab61" (UID: "ad881410-229a-4427-862b-8febd0e5ab61"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.118033 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "ad881410-229a-4427-862b-8febd0e5ab61" (UID: "ad881410-229a-4427-862b-8febd0e5ab61"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.118243 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-kube-api-access-ffbht" (OuterVolumeSpecName: "kube-api-access-ffbht") pod "ad881410-229a-4427-862b-8febd0e5ab61" (UID: "ad881410-229a-4427-862b-8febd0e5ab61"). InnerVolumeSpecName "kube-api-access-ffbht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.122358 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ad881410-229a-4427-862b-8febd0e5ab61" (UID: "ad881410-229a-4427-862b-8febd0e5ab61"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.135678 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 20:33:37 crc kubenswrapper[4858]: E0127 20:33:37.136262 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" containerName="rabbitmq" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.136284 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" containerName="rabbitmq" Jan 27 20:33:37 crc kubenswrapper[4858]: E0127 20:33:37.136302 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" containerName="setup-container" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.136310 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" containerName="setup-container" Jan 27 20:33:37 crc kubenswrapper[4858]: E0127 20:33:37.136324 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad881410-229a-4427-862b-8febd0e5ab61" containerName="setup-container" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.136332 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad881410-229a-4427-862b-8febd0e5ab61" containerName="setup-container" Jan 27 20:33:37 crc kubenswrapper[4858]: E0127 20:33:37.136367 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad881410-229a-4427-862b-8febd0e5ab61" containerName="rabbitmq" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.136374 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad881410-229a-4427-862b-8febd0e5ab61" containerName="rabbitmq" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.136633 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" containerName="rabbitmq" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.136659 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad881410-229a-4427-862b-8febd0e5ab61" containerName="rabbitmq" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.140156 4858 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.144204 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.144747 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.144911 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.145005 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.145263 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-m692p" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.149683 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.153704 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad881410-229a-4427-862b-8febd0e5ab61-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ad881410-229a-4427-862b-8febd0e5ab61" (UID: "ad881410-229a-4427-862b-8febd0e5ab61"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.155439 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.161631 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.174613 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-config-data" (OuterVolumeSpecName: "config-data") pod "ad881410-229a-4427-862b-8febd0e5ab61" (UID: "ad881410-229a-4427-862b-8febd0e5ab61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.200301 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-server-conf" (OuterVolumeSpecName: "server-conf") pod "ad881410-229a-4427-862b-8febd0e5ab61" (UID: "ad881410-229a-4427-862b-8febd0e5ab61"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.219448 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.221264 4858 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ad881410-229a-4427-862b-8febd0e5ab61-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.221299 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffbht\" (UniqueName: \"kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-kube-api-access-ffbht\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.221324 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.221335 4858 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.221345 4858 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ad881410-229a-4427-862b-8febd0e5ab61-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.221354 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.221364 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ad881410-229a-4427-862b-8febd0e5ab61-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.253932 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.324290 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.324385 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcq2h\" (UniqueName: \"kubernetes.io/projected/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-kube-api-access-wcq2h\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.324436 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " 
pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.324585 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.324904 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-config-data\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.325028 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.325076 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.325117 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.325149 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.325196 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.325288 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.325582 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.332171 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ad881410-229a-4427-862b-8febd0e5ab61" (UID: "ad881410-229a-4427-862b-8febd0e5ab61"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.428243 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.428348 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcq2h\" (UniqueName: \"kubernetes.io/projected/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-kube-api-access-wcq2h\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.428376 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.428402 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.428468 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-config-data\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.428503 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.428525 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.428563 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.428582 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") 
" pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.428603 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.428633 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.428718 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ad881410-229a-4427-862b-8febd0e5ab61-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.429426 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.429603 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.429880 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.430791 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-config-data\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.431087 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.431171 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.433401 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " 
pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.433604 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.433784 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.434074 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.448151 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcq2h\" (UniqueName: \"kubernetes.io/projected/e61ce5ac-61b7-41f3-aab6-c4b2e03978d1-kube-api-access-wcq2h\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.473832 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1\") " pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.544608 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.863223 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ad881410-229a-4427-862b-8febd0e5ab61","Type":"ContainerDied","Data":"573448dbdd215dea023daed0731040741e850d68a1f94dd36e5181061dec8d1a"} Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.863306 4858 scope.go:117] "RemoveContainer" containerID="bbe9d09c4a05d23f65ac7dfee7aee83648557a8f6299c8b7af77c260fc6b0d14" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.863353 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.914624 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.924058 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.927166 4858 scope.go:117] "RemoveContainer" containerID="ca76863e730916538ae7127ed44c1cadecfed9ca3f49b484cc25424b7224480b" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.953027 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.956190 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.963217 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.963360 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.963367 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8hkvj" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.963573 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.963880 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.964428 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 20:33:37 crc kubenswrapper[4858]: I0127 20:33:37.964912 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.016479 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.060127 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.084781 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2" path="/var/lib/kubelet/pods/825c6dc0-6dc8-4c7b-a5d4-daf28c00cac2/volumes" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.086241 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad881410-229a-4427-862b-8febd0e5ab61" path="/var/lib/kubelet/pods/ad881410-229a-4427-862b-8febd0e5ab61/volumes" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.146383 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.146465 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.146483 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.146500 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.146517 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.146544 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.146583 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.146625 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.146704 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv282\" (UniqueName: \"kubernetes.io/projected/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-kube-api-access-xv282\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.146739 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.146769 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.249241 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.249478 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.249494 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.249513 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.249564 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.249591 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.249633 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.249744 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv282\" (UniqueName: \"kubernetes.io/projected/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-kube-api-access-xv282\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.249792 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.249827 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.249848 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.250676 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.250858 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.250957 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.251066 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.251419 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.251892 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.255776 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.256762 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.257294 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.258146 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.271627 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv282\" (UniqueName: \"kubernetes.io/projected/d8aaed51-c0b1-4242-8d7b-a4256539e2ea-kube-api-access-xv282\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.300064 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d8aaed51-c0b1-4242-8d7b-a4256539e2ea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.330957 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.876985 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1","Type":"ContainerStarted","Data":"63593c48b16a7eff0de557c56ee3182a7a0c3537c0b457f45a5bef805cbfc6d9"} Jan 27 20:33:38 crc kubenswrapper[4858]: I0127 20:33:38.929485 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 20:33:39 crc kubenswrapper[4858]: I0127 20:33:39.892840 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d8aaed51-c0b1-4242-8d7b-a4256539e2ea","Type":"ContainerStarted","Data":"56b1ecf0882f65a3e230bce729b53f25aa62433022d749c5ea7c405f927e3fbd"} Jan 27 20:33:40 crc kubenswrapper[4858]: I0127 20:33:40.905712 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1","Type":"ContainerStarted","Data":"15b15aa1de9cea4bf90b76986a4a8963c0a79378b1c3dd5a38dca13a2540ce43"} Jan 27 20:33:41 crc kubenswrapper[4858]: I0127 20:33:41.918790 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d8aaed51-c0b1-4242-8d7b-a4256539e2ea","Type":"ContainerStarted","Data":"85d40f12678e7a415296db92bb388501c252f4f8f92a7ae33e7cdf339a9c934c"} Jan 27 20:33:43 crc kubenswrapper[4858]: I0127 20:33:43.070707 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:33:43 crc kubenswrapper[4858]: E0127 20:33:43.071243 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.200502 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bc559fd99-qf8dl"] Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.203758 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.210781 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.215583 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bc559fd99-qf8dl"] Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.293223 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.293320 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-dns-svc\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.293370 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-ovsdbserver-nb\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.293432 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9b8q\" (UniqueName: \"kubernetes.io/projected/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-kube-api-access-s9b8q\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.293487 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-ovsdbserver-sb\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.293623 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-config\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.294061 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-dns-swift-storage-0\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.396949 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.397227 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.397364 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-dns-svc\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.397399 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-ovsdbserver-nb\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.397457 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9b8q\" (UniqueName: \"kubernetes.io/projected/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-kube-api-access-s9b8q\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.397524 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-ovsdbserver-sb\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.397637 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-config\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.398812 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.398855 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-dns-svc\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.399098 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-ovsdbserver-nb\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " 
pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.399121 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-ovsdbserver-sb\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.399779 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-dns-swift-storage-0\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.399870 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-config\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.432947 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9b8q\" (UniqueName: \"kubernetes.io/projected/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-kube-api-access-s9b8q\") pod \"dnsmasq-dns-5bc559fd99-qf8dl\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:47 crc kubenswrapper[4858]: I0127 20:33:47.536067 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:48 crc kubenswrapper[4858]: I0127 20:33:48.053786 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bc559fd99-qf8dl"] Jan 27 20:33:48 crc kubenswrapper[4858]: I0127 20:33:48.989419 4858 generic.go:334] "Generic (PLEG): container finished" podID="557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c" containerID="d9c2ed3b029a0d673089e43bdca3715333fcd0ddeb76754aa94961b401632fa2" exitCode=0 Jan 27 20:33:48 crc kubenswrapper[4858]: I0127 20:33:48.989497 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" event={"ID":"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c","Type":"ContainerDied","Data":"d9c2ed3b029a0d673089e43bdca3715333fcd0ddeb76754aa94961b401632fa2"} Jan 27 20:33:48 crc kubenswrapper[4858]: I0127 20:33:48.989854 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" event={"ID":"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c","Type":"ContainerStarted","Data":"74891aff32bf87fc922639141d48d76e18329adcdf38663d1ff74d63407d2d96"} Jan 27 20:33:50 crc kubenswrapper[4858]: I0127 20:33:50.007514 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" event={"ID":"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c","Type":"ContainerStarted","Data":"81cfe3e4c3bb229b7e1ee915e1994fe5da8cbb2f84f0053d2d2131899f894f5a"} Jan 27 20:33:50 crc kubenswrapper[4858]: I0127 20:33:50.008085 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:50 crc kubenswrapper[4858]: I0127 20:33:50.049908 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" podStartSLOduration=3.04986887 podStartE2EDuration="3.04986887s" 
podCreationTimestamp="2026-01-27 20:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:33:50.036807514 +0000 UTC m=+1574.744623290" watchObservedRunningTime="2026-01-27 20:33:50.04986887 +0000 UTC m=+1574.757684616" Jan 27 20:33:51 crc kubenswrapper[4858]: I0127 20:33:51.779489 4858 scope.go:117] "RemoveContainer" containerID="e78c85a4de02a8c6ba4a8fe7344f10a97b7b8b0bf8563bfcc3ae7c30ff0bbd9b" Jan 27 20:33:51 crc kubenswrapper[4858]: I0127 20:33:51.814164 4858 scope.go:117] "RemoveContainer" containerID="a7f9036c77e96dfe20e59ced87acad06df172066fcc8ff6ae5ba1b818cc4ed32" Jan 27 20:33:51 crc kubenswrapper[4858]: I0127 20:33:51.874739 4858 scope.go:117] "RemoveContainer" containerID="b439e35aa3ebaa567247e0fa57cfdd25a6b0cef090835b9e2bb45d1e2b49fc66" Jan 27 20:33:51 crc kubenswrapper[4858]: I0127 20:33:51.917982 4858 scope.go:117] "RemoveContainer" containerID="3d879a27d8a6768144d1dc49555b88f49e95ccd17d8edb36bf06e54c7228c71c" Jan 27 20:33:57 crc kubenswrapper[4858]: I0127 20:33:57.071461 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:33:57 crc kubenswrapper[4858]: E0127 20:33:57.072744 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:33:57 crc kubenswrapper[4858]: I0127 20:33:57.538038 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:33:57 crc kubenswrapper[4858]: I0127 20:33:57.612443 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fb8cf77bc-8xnvj"] Jan 27 20:33:57 crc kubenswrapper[4858]: I0127 20:33:57.613308 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" podUID="90f1e30f-2381-470e-9465-4d30253d91c7" containerName="dnsmasq-dns" containerID="cri-o://fd780543b236bc836f5e15fb636f7cbca6813533c8ea7132fd7eabcf6dd0bf2c" gracePeriod=10 Jan 27 20:33:57 crc kubenswrapper[4858]: I0127 20:33:57.803679 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-694549759-h5nzn"] Jan 27 20:33:57 crc kubenswrapper[4858]: I0127 20:33:57.805924 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:57 crc kubenswrapper[4858]: I0127 20:33:57.818751 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-694549759-h5nzn"] Jan 27 20:33:57 crc kubenswrapper[4858]: I0127 20:33:57.988515 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-ovsdbserver-nb\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:57 crc kubenswrapper[4858]: I0127 20:33:57.988589 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-dns-swift-storage-0\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:57 crc kubenswrapper[4858]: I0127 20:33:57.988611 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-ovsdbserver-sb\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:57 crc kubenswrapper[4858]: I0127 20:33:57.988667 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-config\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:57 crc kubenswrapper[4858]: I0127 20:33:57.988685 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-openstack-edpm-ipam\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:57 crc kubenswrapper[4858]: I0127 20:33:57.988750 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7t58\" (UniqueName: \"kubernetes.io/projected/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-kube-api-access-q7t58\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:57 crc kubenswrapper[4858]: I0127 20:33:57.988786 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-dns-svc\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.096002 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7t58\" (UniqueName: \"kubernetes.io/projected/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-kube-api-access-q7t58\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.096100 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-dns-svc\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.096177 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-ovsdbserver-nb\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.096204 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-dns-swift-storage-0\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.096226 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-ovsdbserver-sb\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.096274 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-config\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.096291 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-openstack-edpm-ipam\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.097484 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-openstack-edpm-ipam\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.098410 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-dns-svc\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.100482 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-ovsdbserver-sb\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.101072 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-dns-swift-storage-0\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.101635 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-config\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.108153 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-ovsdbserver-nb\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.132368 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7t58\" (UniqueName: \"kubernetes.io/projected/d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e-kube-api-access-q7t58\") pod \"dnsmasq-dns-694549759-h5nzn\" (UID: \"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e\") " pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.136161 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.178593 4858 generic.go:334] "Generic (PLEG): container finished" podID="90f1e30f-2381-470e-9465-4d30253d91c7" containerID="fd780543b236bc836f5e15fb636f7cbca6813533c8ea7132fd7eabcf6dd0bf2c" exitCode=0 Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.180055 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" event={"ID":"90f1e30f-2381-470e-9465-4d30253d91c7","Type":"ContainerDied","Data":"fd780543b236bc836f5e15fb636f7cbca6813533c8ea7132fd7eabcf6dd0bf2c"} Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.295170 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.401670 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-ovsdbserver-sb\") pod \"90f1e30f-2381-470e-9465-4d30253d91c7\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.401705 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvhmm\" (UniqueName: \"kubernetes.io/projected/90f1e30f-2381-470e-9465-4d30253d91c7-kube-api-access-cvhmm\") pod \"90f1e30f-2381-470e-9465-4d30253d91c7\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.402094 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-dns-swift-storage-0\") pod \"90f1e30f-2381-470e-9465-4d30253d91c7\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.402226 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-config\") pod \"90f1e30f-2381-470e-9465-4d30253d91c7\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.402255 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-dns-svc\") pod \"90f1e30f-2381-470e-9465-4d30253d91c7\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.402344 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-ovsdbserver-nb\") pod \"90f1e30f-2381-470e-9465-4d30253d91c7\" (UID: \"90f1e30f-2381-470e-9465-4d30253d91c7\") " Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.430777 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90f1e30f-2381-470e-9465-4d30253d91c7-kube-api-access-cvhmm" (OuterVolumeSpecName: "kube-api-access-cvhmm") pod "90f1e30f-2381-470e-9465-4d30253d91c7" (UID: "90f1e30f-2381-470e-9465-4d30253d91c7"). InnerVolumeSpecName "kube-api-access-cvhmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.469030 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "90f1e30f-2381-470e-9465-4d30253d91c7" (UID: "90f1e30f-2381-470e-9465-4d30253d91c7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.472489 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "90f1e30f-2381-470e-9465-4d30253d91c7" (UID: "90f1e30f-2381-470e-9465-4d30253d91c7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.478734 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-config" (OuterVolumeSpecName: "config") pod "90f1e30f-2381-470e-9465-4d30253d91c7" (UID: "90f1e30f-2381-470e-9465-4d30253d91c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.484500 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "90f1e30f-2381-470e-9465-4d30253d91c7" (UID: "90f1e30f-2381-470e-9465-4d30253d91c7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.486009 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "90f1e30f-2381-470e-9465-4d30253d91c7" (UID: "90f1e30f-2381-470e-9465-4d30253d91c7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.504961 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.504998 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.505007 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.505017 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.505027 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvhmm\" (UniqueName: \"kubernetes.io/projected/90f1e30f-2381-470e-9465-4d30253d91c7-kube-api-access-cvhmm\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.505036 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90f1e30f-2381-470e-9465-4d30253d91c7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:33:58 crc kubenswrapper[4858]: I0127 20:33:58.713731 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-694549759-h5nzn"] Jan 27 20:33:59 crc kubenswrapper[4858]: I0127 20:33:59.196676 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" event={"ID":"90f1e30f-2381-470e-9465-4d30253d91c7","Type":"ContainerDied","Data":"b3f3b5a7e94bbff8f6b44da4e2c85d8568019bda57722d8f5e8a6763dad716b7"} Jan 27 20:33:59 crc kubenswrapper[4858]: I0127 20:33:59.198307 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-694549759-h5nzn" event={"ID":"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e","Type":"ContainerStarted","Data":"1e50cc9ab0506822223b3e4c62c8f4cdbc1eb49d76c26a762b8d326be0a710df"} Jan 27 20:33:59 crc kubenswrapper[4858]: I0127 20:33:59.196696 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fb8cf77bc-8xnvj" Jan 27 20:33:59 crc kubenswrapper[4858]: I0127 20:33:59.198345 4858 scope.go:117] "RemoveContainer" containerID="fd780543b236bc836f5e15fb636f7cbca6813533c8ea7132fd7eabcf6dd0bf2c" Jan 27 20:33:59 crc kubenswrapper[4858]: I0127 20:33:59.244092 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fb8cf77bc-8xnvj"] Jan 27 20:33:59 crc kubenswrapper[4858]: I0127 20:33:59.253836 4858 scope.go:117] "RemoveContainer" containerID="2dbf29fdc6d8ee3347bba154276bcd4fad75082632ee5961381463249d72166b" Jan 27 20:33:59 crc kubenswrapper[4858]: I0127 20:33:59.258124 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fb8cf77bc-8xnvj"] Jan 27 20:34:00 crc kubenswrapper[4858]: I0127 20:34:00.087420 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90f1e30f-2381-470e-9465-4d30253d91c7" path="/var/lib/kubelet/pods/90f1e30f-2381-470e-9465-4d30253d91c7/volumes" Jan 27 20:34:00 crc kubenswrapper[4858]: I0127 20:34:00.211893 4858 generic.go:334] "Generic (PLEG): container finished" podID="d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e" containerID="27a7afc1ddb57bbe68da16e6033459aa2e00153ea41c3f908d42929aa7c39a2a" exitCode=0 Jan 27 20:34:00 crc kubenswrapper[4858]: I0127 20:34:00.211951 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-694549759-h5nzn" event={"ID":"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e","Type":"ContainerDied","Data":"27a7afc1ddb57bbe68da16e6033459aa2e00153ea41c3f908d42929aa7c39a2a"} Jan 27 20:34:01 crc kubenswrapper[4858]: I0127 20:34:01.224014 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-694549759-h5nzn" event={"ID":"d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e","Type":"ContainerStarted","Data":"8c91e3c23027b1e13ac57e0852839feb69c72ba43b143c682c0814723cf9f695"} Jan 27 20:34:01 crc kubenswrapper[4858]: I0127 20:34:01.224398 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:34:01 crc kubenswrapper[4858]: I0127 20:34:01.250208 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-694549759-h5nzn" podStartSLOduration=4.250186768 podStartE2EDuration="4.250186768s" podCreationTimestamp="2026-01-27 20:33:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:34:01.239617694 +0000 UTC m=+1585.947433410" watchObservedRunningTime="2026-01-27 20:34:01.250186768 +0000 UTC m=+1585.958002474" Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.138762 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-694549759-h5nzn" Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.205851 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bc559fd99-qf8dl"] Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.206099 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" podUID="557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c" containerName="dnsmasq-dns" 
containerID="cri-o://81cfe3e4c3bb229b7e1ee915e1994fe5da8cbb2f84f0053d2d2131899f894f5a" gracePeriod=10 Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.732606 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.777029 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-dns-swift-storage-0\") pod \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.777096 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-config\") pod \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.777164 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-ovsdbserver-sb\") pod \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.777258 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-ovsdbserver-nb\") pod \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.777371 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-dns-svc\") pod \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.777588 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-openstack-edpm-ipam\") pod \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.777659 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9b8q\" (UniqueName: \"kubernetes.io/projected/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-kube-api-access-s9b8q\") pod \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\" (UID: \"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c\") " Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.802442 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-kube-api-access-s9b8q" (OuterVolumeSpecName: "kube-api-access-s9b8q") pod "557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c" (UID: "557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c"). InnerVolumeSpecName "kube-api-access-s9b8q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.845792 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c" (UID: "557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.864658 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c" (UID: "557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.865351 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c" (UID: "557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.865942 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c" (UID: "557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.880601 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-config" (OuterVolumeSpecName: "config") pod "557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c" (UID: "557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.881415 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.881442 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.881458 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.881472 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9b8q\" (UniqueName: \"kubernetes.io/projected/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-kube-api-access-s9b8q\") on node \"crc\" DevicePath \"\"" Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.881485 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.881494 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.894181 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c" (UID: "557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:34:08 crc kubenswrapper[4858]: I0127 20:34:08.982926 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 20:34:09 crc kubenswrapper[4858]: I0127 20:34:09.320777 4858 generic.go:334] "Generic (PLEG): container finished" podID="557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c" containerID="81cfe3e4c3bb229b7e1ee915e1994fe5da8cbb2f84f0053d2d2131899f894f5a" exitCode=0 Jan 27 20:34:09 crc kubenswrapper[4858]: I0127 20:34:09.320831 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" event={"ID":"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c","Type":"ContainerDied","Data":"81cfe3e4c3bb229b7e1ee915e1994fe5da8cbb2f84f0053d2d2131899f894f5a"} Jan 27 20:34:09 crc kubenswrapper[4858]: I0127 20:34:09.320863 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" event={"ID":"557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c","Type":"ContainerDied","Data":"74891aff32bf87fc922639141d48d76e18329adcdf38663d1ff74d63407d2d96"} Jan 27 20:34:09 crc kubenswrapper[4858]: I0127 20:34:09.320889 4858 scope.go:117] "RemoveContainer" containerID="81cfe3e4c3bb229b7e1ee915e1994fe5da8cbb2f84f0053d2d2131899f894f5a" Jan 27 20:34:09 crc kubenswrapper[4858]: I0127 20:34:09.321085 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bc559fd99-qf8dl" Jan 27 20:34:09 crc kubenswrapper[4858]: I0127 20:34:09.391208 4858 scope.go:117] "RemoveContainer" containerID="d9c2ed3b029a0d673089e43bdca3715333fcd0ddeb76754aa94961b401632fa2" Jan 27 20:34:09 crc kubenswrapper[4858]: I0127 20:34:09.393453 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bc559fd99-qf8dl"] Jan 27 20:34:09 crc kubenswrapper[4858]: I0127 20:34:09.405062 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bc559fd99-qf8dl"] Jan 27 20:34:09 crc kubenswrapper[4858]: I0127 20:34:09.424334 4858 scope.go:117] "RemoveContainer" containerID="81cfe3e4c3bb229b7e1ee915e1994fe5da8cbb2f84f0053d2d2131899f894f5a" Jan 27 20:34:09 crc kubenswrapper[4858]: E0127 20:34:09.425474 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81cfe3e4c3bb229b7e1ee915e1994fe5da8cbb2f84f0053d2d2131899f894f5a\": container with ID starting with 81cfe3e4c3bb229b7e1ee915e1994fe5da8cbb2f84f0053d2d2131899f894f5a not found: ID does not exist" containerID="81cfe3e4c3bb229b7e1ee915e1994fe5da8cbb2f84f0053d2d2131899f894f5a" Jan 27 20:34:09 crc kubenswrapper[4858]: I0127 20:34:09.425534 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81cfe3e4c3bb229b7e1ee915e1994fe5da8cbb2f84f0053d2d2131899f894f5a"} err="failed to get container status \"81cfe3e4c3bb229b7e1ee915e1994fe5da8cbb2f84f0053d2d2131899f894f5a\": rpc error: code = NotFound desc = could not find container \"81cfe3e4c3bb229b7e1ee915e1994fe5da8cbb2f84f0053d2d2131899f894f5a\": container with ID starting with 81cfe3e4c3bb229b7e1ee915e1994fe5da8cbb2f84f0053d2d2131899f894f5a not found: ID does not exist" Jan 27 20:34:09 crc kubenswrapper[4858]: I0127 20:34:09.425590 4858 scope.go:117] "RemoveContainer" containerID="d9c2ed3b029a0d673089e43bdca3715333fcd0ddeb76754aa94961b401632fa2" Jan 27 20:34:09 crc kubenswrapper[4858]: E0127 
20:34:09.426071 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9c2ed3b029a0d673089e43bdca3715333fcd0ddeb76754aa94961b401632fa2\": container with ID starting with d9c2ed3b029a0d673089e43bdca3715333fcd0ddeb76754aa94961b401632fa2 not found: ID does not exist" containerID="d9c2ed3b029a0d673089e43bdca3715333fcd0ddeb76754aa94961b401632fa2" Jan 27 20:34:09 crc kubenswrapper[4858]: I0127 20:34:09.426128 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9c2ed3b029a0d673089e43bdca3715333fcd0ddeb76754aa94961b401632fa2"} err="failed to get container status \"d9c2ed3b029a0d673089e43bdca3715333fcd0ddeb76754aa94961b401632fa2\": rpc error: code = NotFound desc = could not find container \"d9c2ed3b029a0d673089e43bdca3715333fcd0ddeb76754aa94961b401632fa2\": container with ID starting with d9c2ed3b029a0d673089e43bdca3715333fcd0ddeb76754aa94961b401632fa2 not found: ID does not exist" Jan 27 20:34:10 crc kubenswrapper[4858]: I0127 20:34:10.071430 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:34:10 crc kubenswrapper[4858]: E0127 20:34:10.071724 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:34:10 crc kubenswrapper[4858]: I0127 20:34:10.083435 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c" path="/var/lib/kubelet/pods/557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c/volumes" Jan 27 20:34:13 crc kubenswrapper[4858]: I0127 20:34:13.373281 4858 generic.go:334] "Generic (PLEG): container finished" podID="e61ce5ac-61b7-41f3-aab6-c4b2e03978d1" containerID="15b15aa1de9cea4bf90b76986a4a8963c0a79378b1c3dd5a38dca13a2540ce43" exitCode=0 Jan 27 20:34:13 crc kubenswrapper[4858]: I0127 20:34:13.373422 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1","Type":"ContainerDied","Data":"15b15aa1de9cea4bf90b76986a4a8963c0a79378b1c3dd5a38dca13a2540ce43"} Jan 27 20:34:14 crc kubenswrapper[4858]: I0127 20:34:14.389167 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e61ce5ac-61b7-41f3-aab6-c4b2e03978d1","Type":"ContainerStarted","Data":"9595b78b2c878dc573b525565d580f1d329b35570801066f51803d8228282421"} Jan 27 20:34:14 crc kubenswrapper[4858]: I0127 20:34:14.390031 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 20:34:14 crc kubenswrapper[4858]: I0127 20:34:14.391570 4858 generic.go:334] "Generic (PLEG): container finished" podID="d8aaed51-c0b1-4242-8d7b-a4256539e2ea" containerID="85d40f12678e7a415296db92bb388501c252f4f8f92a7ae33e7cdf339a9c934c" exitCode=0 Jan 27 20:34:14 crc kubenswrapper[4858]: I0127 20:34:14.391583 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d8aaed51-c0b1-4242-8d7b-a4256539e2ea","Type":"ContainerDied","Data":"85d40f12678e7a415296db92bb388501c252f4f8f92a7ae33e7cdf339a9c934c"} Jan 27 20:34:14 crc 
kubenswrapper[4858]: I0127 20:34:14.460176 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.460147293 podStartE2EDuration="38.460147293s" podCreationTimestamp="2026-01-27 20:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:34:14.447796917 +0000 UTC m=+1599.155612623" watchObservedRunningTime="2026-01-27 20:34:14.460147293 +0000 UTC m=+1599.167962999" Jan 27 20:34:15 crc kubenswrapper[4858]: I0127 20:34:15.404271 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d8aaed51-c0b1-4242-8d7b-a4256539e2ea","Type":"ContainerStarted","Data":"2b7c46bb061add27712c1dcc3ea339ae1dbc214c8524c978b3ccee6d7982d814"} Jan 27 20:34:15 crc kubenswrapper[4858]: I0127 20:34:15.404909 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:34:15 crc kubenswrapper[4858]: I0127 20:34:15.444460 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.444432028 podStartE2EDuration="38.444432028s" podCreationTimestamp="2026-01-27 20:33:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:34:15.428278573 +0000 UTC m=+1600.136094289" watchObservedRunningTime="2026-01-27 20:34:15.444432028 +0000 UTC m=+1600.152247774" Jan 27 20:34:21 crc kubenswrapper[4858]: I0127 20:34:21.071273 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:34:21 crc kubenswrapper[4858]: E0127 20:34:21.072090 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.104767 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l"] Jan 27 20:34:22 crc kubenswrapper[4858]: E0127 20:34:22.105726 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c" containerName="init" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.105741 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c" containerName="init" Jan 27 20:34:22 crc kubenswrapper[4858]: E0127 20:34:22.105755 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c" containerName="dnsmasq-dns" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.105765 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c" containerName="dnsmasq-dns" Jan 27 20:34:22 crc kubenswrapper[4858]: E0127 20:34:22.105792 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f1e30f-2381-470e-9465-4d30253d91c7" containerName="init" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.105798 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="90f1e30f-2381-470e-9465-4d30253d91c7" containerName="init" Jan 27 20:34:22 crc kubenswrapper[4858]: E0127 20:34:22.105833 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f1e30f-2381-470e-9465-4d30253d91c7" containerName="dnsmasq-dns" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.105840 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f1e30f-2381-470e-9465-4d30253d91c7" containerName="dnsmasq-dns" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.106306 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="557ee5b8-e7a1-47ce-b1e3-fe7d5cc4fc7c" containerName="dnsmasq-dns" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.106335 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f1e30f-2381-470e-9465-4d30253d91c7" containerName="dnsmasq-dns" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.114050 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l"] Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.114151 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.119907 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4x4qb" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.120010 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.120672 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.120731 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.164603 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8454l\" (UID: \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.164790 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc2t4\" (UniqueName: \"kubernetes.io/projected/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-kube-api-access-xc2t4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8454l\" (UID: \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.164825 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8454l\" (UID: \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.165636 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8454l\" (UID: \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.266381 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc2t4\" (UniqueName: \"kubernetes.io/projected/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-kube-api-access-xc2t4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8454l\" (UID: \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.266439 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8454l\" (UID: \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.266641 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8454l\" (UID: \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.266714 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8454l\" (UID: \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.273292 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8454l\" (UID: \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.274041 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8454l\" (UID: \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.274364 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8454l\" (UID: \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.284797 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xc2t4\" (UniqueName: \"kubernetes.io/projected/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-kube-api-access-xc2t4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8454l\" (UID: \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" Jan 27 20:34:22 crc kubenswrapper[4858]: I0127 20:34:22.445382 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" Jan 27 20:34:23 crc kubenswrapper[4858]: I0127 20:34:23.039490 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l"] Jan 27 20:34:23 crc kubenswrapper[4858]: I0127 20:34:23.491409 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" event={"ID":"dfd9ae76-5a01-46af-995d-6fa271c1e3b8","Type":"ContainerStarted","Data":"123ed68119cbee65550d2c4bd5d7a0b05c188277646d5f82ce6f99b293a9d89e"} Jan 27 20:34:27 crc kubenswrapper[4858]: I0127 20:34:27.548353 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="e61ce5ac-61b7-41f3-aab6-c4b2e03978d1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.228:5671: connect: connection refused" Jan 27 20:34:28 crc kubenswrapper[4858]: I0127 20:34:28.334101 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d8aaed51-c0b1-4242-8d7b-a4256539e2ea" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.229:5671: connect: connection refused" Jan 27 20:34:33 crc kubenswrapper[4858]: I0127 20:34:33.636879 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" event={"ID":"dfd9ae76-5a01-46af-995d-6fa271c1e3b8","Type":"ContainerStarted","Data":"41579751dd60bdefe1da4a6dd31cba7691cb92f74172348f940ee386a4180718"} Jan 27 20:34:33 crc kubenswrapper[4858]: I0127 20:34:33.666273 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" podStartSLOduration=1.852257588 podStartE2EDuration="11.666247861s" podCreationTimestamp="2026-01-27 20:34:22 +0000 UTC" firstStartedPulling="2026-01-27 20:34:23.048942849 +0000 UTC m=+1607.756758555" lastFinishedPulling="2026-01-27 20:34:32.862933092 +0000 UTC m=+1617.570748828" observedRunningTime="2026-01-27 20:34:33.657006882 +0000 UTC m=+1618.364822608" watchObservedRunningTime="2026-01-27 20:34:33.666247861 +0000 UTC m=+1618.374063577" Jan 27 20:34:35 crc kubenswrapper[4858]: I0127 20:34:35.071897 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:34:35 crc kubenswrapper[4858]: E0127 20:34:35.072619 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:34:37 crc kubenswrapper[4858]: I0127 20:34:37.547775 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 20:34:38 crc 
kubenswrapper[4858]: I0127 20:34:38.334844 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 20:34:45 crc kubenswrapper[4858]: I0127 20:34:45.267289 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6x7b7"] Jan 27 20:34:45 crc kubenswrapper[4858]: I0127 20:34:45.271895 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6x7b7" Jan 27 20:34:45 crc kubenswrapper[4858]: I0127 20:34:45.286685 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6x7b7"] Jan 27 20:34:45 crc kubenswrapper[4858]: I0127 20:34:45.319668 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmm9d\" (UniqueName: \"kubernetes.io/projected/72c982d1-53e2-49e0-88ee-e6807485e9dc-kube-api-access-gmm9d\") pod \"certified-operators-6x7b7\" (UID: \"72c982d1-53e2-49e0-88ee-e6807485e9dc\") " pod="openshift-marketplace/certified-operators-6x7b7" Jan 27 20:34:45 crc kubenswrapper[4858]: I0127 20:34:45.319726 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72c982d1-53e2-49e0-88ee-e6807485e9dc-catalog-content\") pod \"certified-operators-6x7b7\" (UID: \"72c982d1-53e2-49e0-88ee-e6807485e9dc\") " pod="openshift-marketplace/certified-operators-6x7b7" Jan 27 20:34:45 crc kubenswrapper[4858]: I0127 20:34:45.319887 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72c982d1-53e2-49e0-88ee-e6807485e9dc-utilities\") pod \"certified-operators-6x7b7\" (UID: \"72c982d1-53e2-49e0-88ee-e6807485e9dc\") " pod="openshift-marketplace/certified-operators-6x7b7" Jan 27 20:34:45 crc kubenswrapper[4858]: I0127 20:34:45.422366 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72c982d1-53e2-49e0-88ee-e6807485e9dc-catalog-content\") pod \"certified-operators-6x7b7\" (UID: \"72c982d1-53e2-49e0-88ee-e6807485e9dc\") " pod="openshift-marketplace/certified-operators-6x7b7" Jan 27 20:34:45 crc kubenswrapper[4858]: I0127 20:34:45.422530 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72c982d1-53e2-49e0-88ee-e6807485e9dc-utilities\") pod \"certified-operators-6x7b7\" (UID: \"72c982d1-53e2-49e0-88ee-e6807485e9dc\") " pod="openshift-marketplace/certified-operators-6x7b7" Jan 27 20:34:45 crc kubenswrapper[4858]: I0127 20:34:45.422755 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmm9d\" (UniqueName: \"kubernetes.io/projected/72c982d1-53e2-49e0-88ee-e6807485e9dc-kube-api-access-gmm9d\") pod \"certified-operators-6x7b7\" (UID: \"72c982d1-53e2-49e0-88ee-e6807485e9dc\") " pod="openshift-marketplace/certified-operators-6x7b7" Jan 27 20:34:45 crc kubenswrapper[4858]: I0127 20:34:45.422878 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72c982d1-53e2-49e0-88ee-e6807485e9dc-catalog-content\") pod \"certified-operators-6x7b7\" (UID: \"72c982d1-53e2-49e0-88ee-e6807485e9dc\") " pod="openshift-marketplace/certified-operators-6x7b7" Jan 27 20:34:45 crc 
kubenswrapper[4858]: I0127 20:34:45.423281 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72c982d1-53e2-49e0-88ee-e6807485e9dc-utilities\") pod \"certified-operators-6x7b7\" (UID: \"72c982d1-53e2-49e0-88ee-e6807485e9dc\") " pod="openshift-marketplace/certified-operators-6x7b7" Jan 27 20:34:45 crc kubenswrapper[4858]: I0127 20:34:45.462096 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmm9d\" (UniqueName: \"kubernetes.io/projected/72c982d1-53e2-49e0-88ee-e6807485e9dc-kube-api-access-gmm9d\") pod \"certified-operators-6x7b7\" (UID: \"72c982d1-53e2-49e0-88ee-e6807485e9dc\") " pod="openshift-marketplace/certified-operators-6x7b7" Jan 27 20:34:45 crc kubenswrapper[4858]: I0127 20:34:45.610689 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6x7b7" Jan 27 20:34:45 crc kubenswrapper[4858]: I0127 20:34:45.775749 4858 generic.go:334] "Generic (PLEG): container finished" podID="dfd9ae76-5a01-46af-995d-6fa271c1e3b8" containerID="41579751dd60bdefe1da4a6dd31cba7691cb92f74172348f940ee386a4180718" exitCode=0 Jan 27 20:34:45 crc kubenswrapper[4858]: I0127 20:34:45.785268 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" event={"ID":"dfd9ae76-5a01-46af-995d-6fa271c1e3b8","Type":"ContainerDied","Data":"41579751dd60bdefe1da4a6dd31cba7691cb92f74172348f940ee386a4180718"} Jan 27 20:34:46 crc kubenswrapper[4858]: I0127 20:34:46.219229 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6x7b7"] Jan 27 20:34:46 crc kubenswrapper[4858]: I0127 20:34:46.788657 4858 generic.go:334] "Generic (PLEG): container finished" podID="72c982d1-53e2-49e0-88ee-e6807485e9dc" containerID="a00c25c8325aace957bb20371fdcd8875bc09c3e46a252314dee32d960aeb471" exitCode=0 Jan 27 20:34:46 crc kubenswrapper[4858]: I0127 20:34:46.788724 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x7b7" event={"ID":"72c982d1-53e2-49e0-88ee-e6807485e9dc","Type":"ContainerDied","Data":"a00c25c8325aace957bb20371fdcd8875bc09c3e46a252314dee32d960aeb471"} Jan 27 20:34:46 crc kubenswrapper[4858]: I0127 20:34:46.788788 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x7b7" event={"ID":"72c982d1-53e2-49e0-88ee-e6807485e9dc","Type":"ContainerStarted","Data":"bb54c0f9d5d00c2ea23a0f4aba2ce77dcd849ca1ecd71912e1e26e7453c7fe44"} Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.407425 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.573653 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-repo-setup-combined-ca-bundle\") pod \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\" (UID: \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\") " Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.573796 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc2t4\" (UniqueName: \"kubernetes.io/projected/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-kube-api-access-xc2t4\") pod \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\" (UID: \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\") " Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.573975 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-inventory\") pod \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\" (UID: \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\") " Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.574057 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-ssh-key-openstack-edpm-ipam\") pod \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\" (UID: \"dfd9ae76-5a01-46af-995d-6fa271c1e3b8\") " Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.580540 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "dfd9ae76-5a01-46af-995d-6fa271c1e3b8" (UID: "dfd9ae76-5a01-46af-995d-6fa271c1e3b8"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.582139 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-kube-api-access-xc2t4" (OuterVolumeSpecName: "kube-api-access-xc2t4") pod "dfd9ae76-5a01-46af-995d-6fa271c1e3b8" (UID: "dfd9ae76-5a01-46af-995d-6fa271c1e3b8"). InnerVolumeSpecName "kube-api-access-xc2t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.608721 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dfd9ae76-5a01-46af-995d-6fa271c1e3b8" (UID: "dfd9ae76-5a01-46af-995d-6fa271c1e3b8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.613656 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-inventory" (OuterVolumeSpecName: "inventory") pod "dfd9ae76-5a01-46af-995d-6fa271c1e3b8" (UID: "dfd9ae76-5a01-46af-995d-6fa271c1e3b8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.677248 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc2t4\" (UniqueName: \"kubernetes.io/projected/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-kube-api-access-xc2t4\") on node \"crc\" DevicePath \"\"" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.677289 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.677300 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.677309 4858 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd9ae76-5a01-46af-995d-6fa271c1e3b8-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.805771 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" event={"ID":"dfd9ae76-5a01-46af-995d-6fa271c1e3b8","Type":"ContainerDied","Data":"123ed68119cbee65550d2c4bd5d7a0b05c188277646d5f82ce6f99b293a9d89e"} Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.805815 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="123ed68119cbee65550d2c4bd5d7a0b05c188277646d5f82ce6f99b293a9d89e" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.805884 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8454l" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.903900 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h"] Jan 27 20:34:47 crc kubenswrapper[4858]: E0127 20:34:47.904532 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfd9ae76-5a01-46af-995d-6fa271c1e3b8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.904566 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd9ae76-5a01-46af-995d-6fa271c1e3b8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.904790 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfd9ae76-5a01-46af-995d-6fa271c1e3b8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.905817 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.908541 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.908629 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.908841 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4x4qb" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.910089 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.918170 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h"] Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.983186 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dea36689-21b8-4ef7-9ead-35b516cb5f60-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d646h\" (UID: \"dea36689-21b8-4ef7-9ead-35b516cb5f60\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.983306 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zllsc\" (UniqueName: \"kubernetes.io/projected/dea36689-21b8-4ef7-9ead-35b516cb5f60-kube-api-access-zllsc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d646h\" (UID: \"dea36689-21b8-4ef7-9ead-35b516cb5f60\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" Jan 27 20:34:47 crc kubenswrapper[4858]: I0127 20:34:47.983513 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dea36689-21b8-4ef7-9ead-35b516cb5f60-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d646h\" (UID: \"dea36689-21b8-4ef7-9ead-35b516cb5f60\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" Jan 27 20:34:48 crc kubenswrapper[4858]: I0127 20:34:48.084845 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dea36689-21b8-4ef7-9ead-35b516cb5f60-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d646h\" (UID: \"dea36689-21b8-4ef7-9ead-35b516cb5f60\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" Jan 27 20:34:48 crc kubenswrapper[4858]: I0127 20:34:48.084898 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zllsc\" (UniqueName: \"kubernetes.io/projected/dea36689-21b8-4ef7-9ead-35b516cb5f60-kube-api-access-zllsc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d646h\" (UID: \"dea36689-21b8-4ef7-9ead-35b516cb5f60\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" Jan 27 20:34:48 crc kubenswrapper[4858]: I0127 20:34:48.085282 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dea36689-21b8-4ef7-9ead-35b516cb5f60-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-d646h\" (UID: \"dea36689-21b8-4ef7-9ead-35b516cb5f60\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" Jan 27 20:34:48 crc kubenswrapper[4858]: I0127 20:34:48.089968 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dea36689-21b8-4ef7-9ead-35b516cb5f60-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d646h\" (UID: \"dea36689-21b8-4ef7-9ead-35b516cb5f60\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" Jan 27 20:34:48 crc kubenswrapper[4858]: I0127 20:34:48.098728 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dea36689-21b8-4ef7-9ead-35b516cb5f60-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d646h\" (UID: \"dea36689-21b8-4ef7-9ead-35b516cb5f60\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" Jan 27 20:34:48 crc kubenswrapper[4858]: I0127 20:34:48.102255 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zllsc\" (UniqueName: \"kubernetes.io/projected/dea36689-21b8-4ef7-9ead-35b516cb5f60-kube-api-access-zllsc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d646h\" (UID: \"dea36689-21b8-4ef7-9ead-35b516cb5f60\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" Jan 27 20:34:48 crc kubenswrapper[4858]: I0127 20:34:48.232722 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" Jan 27 20:34:48 crc kubenswrapper[4858]: I0127 20:34:48.817741 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h"] Jan 27 20:34:48 crc kubenswrapper[4858]: W0127 20:34:48.819492 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddea36689_21b8_4ef7_9ead_35b516cb5f60.slice/crio-07eed9a5945f53e8695379ebe028e7127ee9139203695fa5ca6abf7c41f0b4e0 WatchSource:0}: Error finding container 07eed9a5945f53e8695379ebe028e7127ee9139203695fa5ca6abf7c41f0b4e0: Status 404 returned error can't find the container with id 07eed9a5945f53e8695379ebe028e7127ee9139203695fa5ca6abf7c41f0b4e0 Jan 27 20:34:49 crc kubenswrapper[4858]: I0127 20:34:49.072211 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:34:49 crc kubenswrapper[4858]: E0127 20:34:49.073038 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:34:49 crc kubenswrapper[4858]: I0127 20:34:49.830079 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" event={"ID":"dea36689-21b8-4ef7-9ead-35b516cb5f60","Type":"ContainerStarted","Data":"07eed9a5945f53e8695379ebe028e7127ee9139203695fa5ca6abf7c41f0b4e0"} Jan 27 20:34:52 crc kubenswrapper[4858]: I0127 20:34:52.071678 4858 scope.go:117] "RemoveContainer" 
containerID="bcb1d6cdad9834a8ca239bc0e4bd61fa15a8e10dc74226d26466495bc626a052" Jan 27 20:34:52 crc kubenswrapper[4858]: I0127 20:34:52.349476 4858 scope.go:117] "RemoveContainer" containerID="c3355a00a372c4f61092329a7c661fe3bafcea899cb27b421d0fe1905087f115" Jan 27 20:34:52 crc kubenswrapper[4858]: I0127 20:34:52.416522 4858 scope.go:117] "RemoveContainer" containerID="7a46157259e5e9d82e9db91fb5da218a22d65c3c0c118df0058647a983c7151c" Jan 27 20:34:52 crc kubenswrapper[4858]: I0127 20:34:52.478440 4858 scope.go:117] "RemoveContainer" containerID="bebdb9af01ac798d06c48efe567b86b340a5b27ffc343d8d5385ba411cb0a5cc" Jan 27 20:34:52 crc kubenswrapper[4858]: I0127 20:34:52.562922 4858 scope.go:117] "RemoveContainer" containerID="3b3e39cc70a770b37ee336c30cd07551febee27fdb0ad58a94a144a8a68a0f1a" Jan 27 20:34:52 crc kubenswrapper[4858]: I0127 20:34:52.631945 4858 scope.go:117] "RemoveContainer" containerID="22c0361c866286920f89895512ac3ec83272b9a4efc0c82117f0e6d5d8928c5d" Jan 27 20:34:52 crc kubenswrapper[4858]: I0127 20:34:52.875904 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x7b7" event={"ID":"72c982d1-53e2-49e0-88ee-e6807485e9dc","Type":"ContainerStarted","Data":"a3265a079d911da53d3b32b94db00f10123f6dccf35a9f4d205334447330a766"} Jan 27 20:34:52 crc kubenswrapper[4858]: I0127 20:34:52.877672 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" event={"ID":"dea36689-21b8-4ef7-9ead-35b516cb5f60","Type":"ContainerStarted","Data":"9cac1d624af9ace2fc205d002d0ed92ae142057af3ae7542c610cac3e58d7f9f"} Jan 27 20:34:52 crc kubenswrapper[4858]: I0127 20:34:52.967713 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" podStartSLOduration=5.575990193 podStartE2EDuration="5.967686838s" podCreationTimestamp="2026-01-27 20:34:47 +0000 UTC" firstStartedPulling="2026-01-27 20:34:48.82512246 +0000 UTC m=+1633.532938176" lastFinishedPulling="2026-01-27 20:34:49.216819115 +0000 UTC m=+1633.924634821" observedRunningTime="2026-01-27 20:34:52.956826852 +0000 UTC m=+1637.664642568" watchObservedRunningTime="2026-01-27 20:34:52.967686838 +0000 UTC m=+1637.675502544" Jan 27 20:34:53 crc kubenswrapper[4858]: I0127 20:34:53.889140 4858 generic.go:334] "Generic (PLEG): container finished" podID="72c982d1-53e2-49e0-88ee-e6807485e9dc" containerID="a3265a079d911da53d3b32b94db00f10123f6dccf35a9f4d205334447330a766" exitCode=0 Jan 27 20:34:53 crc kubenswrapper[4858]: I0127 20:34:53.889236 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x7b7" event={"ID":"72c982d1-53e2-49e0-88ee-e6807485e9dc","Type":"ContainerDied","Data":"a3265a079d911da53d3b32b94db00f10123f6dccf35a9f4d205334447330a766"} Jan 27 20:34:54 crc kubenswrapper[4858]: I0127 20:34:54.906862 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x7b7" event={"ID":"72c982d1-53e2-49e0-88ee-e6807485e9dc","Type":"ContainerStarted","Data":"ea757e52691febe9f8a70a38ab3041fc63938a3d6f5462697725755bae138418"} Jan 27 20:34:54 crc kubenswrapper[4858]: I0127 20:34:54.926745 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6x7b7" podStartSLOduration=2.134065023 podStartE2EDuration="9.926728959s" podCreationTimestamp="2026-01-27 20:34:45 +0000 UTC" firstStartedPulling="2026-01-27 20:34:46.790498089 +0000 UTC 
m=+1631.498313795" lastFinishedPulling="2026-01-27 20:34:54.583162025 +0000 UTC m=+1639.290977731" observedRunningTime="2026-01-27 20:34:54.924792782 +0000 UTC m=+1639.632608488" watchObservedRunningTime="2026-01-27 20:34:54.926728959 +0000 UTC m=+1639.634544665" Jan 27 20:34:55 crc kubenswrapper[4858]: I0127 20:34:55.611270 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6x7b7" Jan 27 20:34:55 crc kubenswrapper[4858]: I0127 20:34:55.611339 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6x7b7" Jan 27 20:34:55 crc kubenswrapper[4858]: I0127 20:34:55.946595 4858 generic.go:334] "Generic (PLEG): container finished" podID="dea36689-21b8-4ef7-9ead-35b516cb5f60" containerID="9cac1d624af9ace2fc205d002d0ed92ae142057af3ae7542c610cac3e58d7f9f" exitCode=0 Jan 27 20:34:55 crc kubenswrapper[4858]: I0127 20:34:55.946713 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" event={"ID":"dea36689-21b8-4ef7-9ead-35b516cb5f60","Type":"ContainerDied","Data":"9cac1d624af9ace2fc205d002d0ed92ae142057af3ae7542c610cac3e58d7f9f"} Jan 27 20:34:56 crc kubenswrapper[4858]: I0127 20:34:56.669422 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6x7b7" podUID="72c982d1-53e2-49e0-88ee-e6807485e9dc" containerName="registry-server" probeResult="failure" output=< Jan 27 20:34:56 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 20:34:56 crc kubenswrapper[4858]: > Jan 27 20:34:57 crc kubenswrapper[4858]: I0127 20:34:57.466481 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" Jan 27 20:34:57 crc kubenswrapper[4858]: I0127 20:34:57.623714 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dea36689-21b8-4ef7-9ead-35b516cb5f60-ssh-key-openstack-edpm-ipam\") pod \"dea36689-21b8-4ef7-9ead-35b516cb5f60\" (UID: \"dea36689-21b8-4ef7-9ead-35b516cb5f60\") " Jan 27 20:34:57 crc kubenswrapper[4858]: I0127 20:34:57.623806 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dea36689-21b8-4ef7-9ead-35b516cb5f60-inventory\") pod \"dea36689-21b8-4ef7-9ead-35b516cb5f60\" (UID: \"dea36689-21b8-4ef7-9ead-35b516cb5f60\") " Jan 27 20:34:57 crc kubenswrapper[4858]: I0127 20:34:57.624034 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zllsc\" (UniqueName: \"kubernetes.io/projected/dea36689-21b8-4ef7-9ead-35b516cb5f60-kube-api-access-zllsc\") pod \"dea36689-21b8-4ef7-9ead-35b516cb5f60\" (UID: \"dea36689-21b8-4ef7-9ead-35b516cb5f60\") " Jan 27 20:34:57 crc kubenswrapper[4858]: I0127 20:34:57.632946 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea36689-21b8-4ef7-9ead-35b516cb5f60-kube-api-access-zllsc" (OuterVolumeSpecName: "kube-api-access-zllsc") pod "dea36689-21b8-4ef7-9ead-35b516cb5f60" (UID: "dea36689-21b8-4ef7-9ead-35b516cb5f60"). InnerVolumeSpecName "kube-api-access-zllsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:34:57 crc kubenswrapper[4858]: I0127 20:34:57.662024 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dea36689-21b8-4ef7-9ead-35b516cb5f60-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dea36689-21b8-4ef7-9ead-35b516cb5f60" (UID: "dea36689-21b8-4ef7-9ead-35b516cb5f60"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:34:57 crc kubenswrapper[4858]: I0127 20:34:57.662328 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dea36689-21b8-4ef7-9ead-35b516cb5f60-inventory" (OuterVolumeSpecName: "inventory") pod "dea36689-21b8-4ef7-9ead-35b516cb5f60" (UID: "dea36689-21b8-4ef7-9ead-35b516cb5f60"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:34:57 crc kubenswrapper[4858]: I0127 20:34:57.727075 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dea36689-21b8-4ef7-9ead-35b516cb5f60-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:34:57 crc kubenswrapper[4858]: I0127 20:34:57.727337 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dea36689-21b8-4ef7-9ead-35b516cb5f60-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 20:34:57 crc kubenswrapper[4858]: I0127 20:34:57.727395 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zllsc\" (UniqueName: \"kubernetes.io/projected/dea36689-21b8-4ef7-9ead-35b516cb5f60-kube-api-access-zllsc\") on node \"crc\" DevicePath \"\"" Jan 27 20:34:57 crc kubenswrapper[4858]: I0127 20:34:57.974018 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" event={"ID":"dea36689-21b8-4ef7-9ead-35b516cb5f60","Type":"ContainerDied","Data":"07eed9a5945f53e8695379ebe028e7127ee9139203695fa5ca6abf7c41f0b4e0"} Jan 27 20:34:57 crc kubenswrapper[4858]: I0127 20:34:57.974396 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07eed9a5945f53e8695379ebe028e7127ee9139203695fa5ca6abf7c41f0b4e0" Jan 27 20:34:57 crc kubenswrapper[4858]: I0127 20:34:57.974431 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d646h" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.064512 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj"] Jan 27 20:34:58 crc kubenswrapper[4858]: E0127 20:34:58.065153 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea36689-21b8-4ef7-9ead-35b516cb5f60" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.065179 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea36689-21b8-4ef7-9ead-35b516cb5f60" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.065469 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea36689-21b8-4ef7-9ead-35b516cb5f60" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.066466 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.070949 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.071192 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4x4qb" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.071927 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.074228 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.115791 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj"] Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.239578 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj\" (UID: \"20ffe28d-a9df-4416-85b7-c501d7555431\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.239748 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj\" (UID: \"20ffe28d-a9df-4416-85b7-c501d7555431\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.239876 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26bz8\" (UniqueName: \"kubernetes.io/projected/20ffe28d-a9df-4416-85b7-c501d7555431-kube-api-access-26bz8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj\" (UID: \"20ffe28d-a9df-4416-85b7-c501d7555431\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.239934 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj\" (UID: \"20ffe28d-a9df-4416-85b7-c501d7555431\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.341883 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj\" (UID: \"20ffe28d-a9df-4416-85b7-c501d7555431\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.342213 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj\" (UID: \"20ffe28d-a9df-4416-85b7-c501d7555431\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.342382 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26bz8\" (UniqueName: \"kubernetes.io/projected/20ffe28d-a9df-4416-85b7-c501d7555431-kube-api-access-26bz8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj\" (UID: \"20ffe28d-a9df-4416-85b7-c501d7555431\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.342490 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj\" (UID: \"20ffe28d-a9df-4416-85b7-c501d7555431\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.347344 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj\" (UID: \"20ffe28d-a9df-4416-85b7-c501d7555431\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.347528 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj\" (UID: \"20ffe28d-a9df-4416-85b7-c501d7555431\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.347570 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj\" (UID: \"20ffe28d-a9df-4416-85b7-c501d7555431\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.369861 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26bz8\" (UniqueName: \"kubernetes.io/projected/20ffe28d-a9df-4416-85b7-c501d7555431-kube-api-access-26bz8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj\" (UID: \"20ffe28d-a9df-4416-85b7-c501d7555431\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" Jan 27 20:34:58 crc kubenswrapper[4858]: I0127 20:34:58.384863 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" Jan 27 20:34:59 crc kubenswrapper[4858]: I0127 20:34:59.142307 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj"] Jan 27 20:35:00 crc kubenswrapper[4858]: I0127 20:35:00.002587 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" event={"ID":"20ffe28d-a9df-4416-85b7-c501d7555431","Type":"ContainerStarted","Data":"3078668ca093f4c5b8c5934f6050b28267840e4a2e3dae2bf6e116fd5067cd36"} Jan 27 20:35:01 crc kubenswrapper[4858]: I0127 20:35:01.038611 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" event={"ID":"20ffe28d-a9df-4416-85b7-c501d7555431","Type":"ContainerStarted","Data":"5779e317d1ec86b3687ccc4dc2b019a69772cb06cc8e4a6f43627e843acd4771"} Jan 27 20:35:01 crc kubenswrapper[4858]: I0127 20:35:01.068919 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" podStartSLOduration=2.5467844570000002 podStartE2EDuration="3.068898919s" podCreationTimestamp="2026-01-27 20:34:58 +0000 UTC" firstStartedPulling="2026-01-27 20:34:59.157778103 +0000 UTC m=+1643.865593809" lastFinishedPulling="2026-01-27 20:34:59.679892565 +0000 UTC m=+1644.387708271" observedRunningTime="2026-01-27 20:35:01.062012159 +0000 UTC m=+1645.769827865" watchObservedRunningTime="2026-01-27 20:35:01.068898919 +0000 UTC m=+1645.776714625" Jan 27 20:35:04 crc kubenswrapper[4858]: I0127 20:35:04.079380 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:35:04 crc kubenswrapper[4858]: E0127 20:35:04.080175 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:35:05 crc kubenswrapper[4858]: I0127 20:35:05.675561 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6x7b7" Jan 27 20:35:05 crc kubenswrapper[4858]: I0127 20:35:05.739342 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6x7b7" Jan 27 20:35:05 crc kubenswrapper[4858]: I0127 20:35:05.816032 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6x7b7"] Jan 27 20:35:05 crc kubenswrapper[4858]: I0127 20:35:05.949874 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jrqvn"] Jan 27 20:35:05 crc kubenswrapper[4858]: I0127 20:35:05.950210 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jrqvn" podUID="9ce09890-416b-4b69-87f8-5a695f3c2ce8" containerName="registry-server" containerID="cri-o://e289bd719d90e0709a58b8df53e7b69d10b4e12a91d449ee80c419a3b5a63aa1" gracePeriod=2 Jan 27 20:35:06 crc kubenswrapper[4858]: I0127 20:35:06.116740 4858 generic.go:334] "Generic (PLEG): container finished" podID="9ce09890-416b-4b69-87f8-5a695f3c2ce8" 
containerID="e289bd719d90e0709a58b8df53e7b69d10b4e12a91d449ee80c419a3b5a63aa1" exitCode=0 Jan 27 20:35:06 crc kubenswrapper[4858]: I0127 20:35:06.116826 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrqvn" event={"ID":"9ce09890-416b-4b69-87f8-5a695f3c2ce8","Type":"ContainerDied","Data":"e289bd719d90e0709a58b8df53e7b69d10b4e12a91d449ee80c419a3b5a63aa1"} Jan 27 20:35:06 crc kubenswrapper[4858]: I0127 20:35:06.514516 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jrqvn" Jan 27 20:35:06 crc kubenswrapper[4858]: I0127 20:35:06.555182 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ce09890-416b-4b69-87f8-5a695f3c2ce8-utilities\") pod \"9ce09890-416b-4b69-87f8-5a695f3c2ce8\" (UID: \"9ce09890-416b-4b69-87f8-5a695f3c2ce8\") " Jan 27 20:35:06 crc kubenswrapper[4858]: I0127 20:35:06.555320 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ce09890-416b-4b69-87f8-5a695f3c2ce8-catalog-content\") pod \"9ce09890-416b-4b69-87f8-5a695f3c2ce8\" (UID: \"9ce09890-416b-4b69-87f8-5a695f3c2ce8\") " Jan 27 20:35:06 crc kubenswrapper[4858]: I0127 20:35:06.555661 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfll4\" (UniqueName: \"kubernetes.io/projected/9ce09890-416b-4b69-87f8-5a695f3c2ce8-kube-api-access-zfll4\") pod \"9ce09890-416b-4b69-87f8-5a695f3c2ce8\" (UID: \"9ce09890-416b-4b69-87f8-5a695f3c2ce8\") " Jan 27 20:35:06 crc kubenswrapper[4858]: I0127 20:35:06.560980 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ce09890-416b-4b69-87f8-5a695f3c2ce8-utilities" (OuterVolumeSpecName: "utilities") pod "9ce09890-416b-4b69-87f8-5a695f3c2ce8" (UID: "9ce09890-416b-4b69-87f8-5a695f3c2ce8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:35:06 crc kubenswrapper[4858]: I0127 20:35:06.578166 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce09890-416b-4b69-87f8-5a695f3c2ce8-kube-api-access-zfll4" (OuterVolumeSpecName: "kube-api-access-zfll4") pod "9ce09890-416b-4b69-87f8-5a695f3c2ce8" (UID: "9ce09890-416b-4b69-87f8-5a695f3c2ce8"). InnerVolumeSpecName "kube-api-access-zfll4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:35:06 crc kubenswrapper[4858]: I0127 20:35:06.659034 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfll4\" (UniqueName: \"kubernetes.io/projected/9ce09890-416b-4b69-87f8-5a695f3c2ce8-kube-api-access-zfll4\") on node \"crc\" DevicePath \"\"" Jan 27 20:35:06 crc kubenswrapper[4858]: I0127 20:35:06.659701 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ce09890-416b-4b69-87f8-5a695f3c2ce8-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:35:06 crc kubenswrapper[4858]: I0127 20:35:06.671790 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ce09890-416b-4b69-87f8-5a695f3c2ce8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ce09890-416b-4b69-87f8-5a695f3c2ce8" (UID: "9ce09890-416b-4b69-87f8-5a695f3c2ce8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:35:06 crc kubenswrapper[4858]: I0127 20:35:06.761926 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ce09890-416b-4b69-87f8-5a695f3c2ce8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:35:07 crc kubenswrapper[4858]: I0127 20:35:07.131920 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrqvn" event={"ID":"9ce09890-416b-4b69-87f8-5a695f3c2ce8","Type":"ContainerDied","Data":"ec44b81f0184e1c0b86a89997bbbd3e920462b695cbfae99dab2b830e1a519d9"} Jan 27 20:35:07 crc kubenswrapper[4858]: I0127 20:35:07.132045 4858 scope.go:117] "RemoveContainer" containerID="e289bd719d90e0709a58b8df53e7b69d10b4e12a91d449ee80c419a3b5a63aa1" Jan 27 20:35:07 crc kubenswrapper[4858]: I0127 20:35:07.131975 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jrqvn" Jan 27 20:35:07 crc kubenswrapper[4858]: I0127 20:35:07.178687 4858 scope.go:117] "RemoveContainer" containerID="ac41c51f7eb2afdf600c30cdcc00b2a2f5e47988dbfa489b524547341607806c" Jan 27 20:35:07 crc kubenswrapper[4858]: I0127 20:35:07.182308 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jrqvn"] Jan 27 20:35:07 crc kubenswrapper[4858]: I0127 20:35:07.194729 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jrqvn"] Jan 27 20:35:07 crc kubenswrapper[4858]: I0127 20:35:07.207993 4858 scope.go:117] "RemoveContainer" containerID="9347acaae79258675cbb0e261775a83c1830f930839d84512b4f1da6935fde29" Jan 27 20:35:08 crc kubenswrapper[4858]: I0127 20:35:08.084829 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ce09890-416b-4b69-87f8-5a695f3c2ce8" path="/var/lib/kubelet/pods/9ce09890-416b-4b69-87f8-5a695f3c2ce8/volumes" Jan 27 20:35:17 crc kubenswrapper[4858]: I0127 20:35:17.072091 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:35:17 crc kubenswrapper[4858]: E0127 20:35:17.073120 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:35:32 crc kubenswrapper[4858]: I0127 20:35:32.072031 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:35:32 crc kubenswrapper[4858]: E0127 20:35:32.073416 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:35:46 crc kubenswrapper[4858]: I0127 20:35:46.077599 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:35:46 crc kubenswrapper[4858]: E0127 
20:35:46.078432 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:35:52 crc kubenswrapper[4858]: I0127 20:35:52.806355 4858 scope.go:117] "RemoveContainer" containerID="feaba0008f23f1e7aee49e3e0f41aa88c51ba0b941c77f6182e459112d8408b9" Jan 27 20:35:52 crc kubenswrapper[4858]: I0127 20:35:52.842175 4858 scope.go:117] "RemoveContainer" containerID="9ece8c1246d94eb9b8bd55a0da4ce8ae246f164734bfe0a9de3c94ef0bd40bb6" Jan 27 20:35:52 crc kubenswrapper[4858]: I0127 20:35:52.896194 4858 scope.go:117] "RemoveContainer" containerID="c6d284b1a3bea0cf002332c36984d2ec019deb16b0466ac5b771dc9aff758b76" Jan 27 20:35:52 crc kubenswrapper[4858]: I0127 20:35:52.943067 4858 scope.go:117] "RemoveContainer" containerID="aa84b43dd39168f5465057da4ffc0cf125da3e976c1b56bc5fb7f19c3ad83c36" Jan 27 20:35:52 crc kubenswrapper[4858]: I0127 20:35:52.985162 4858 scope.go:117] "RemoveContainer" containerID="9fc44f526653b3ece0070dd646f1b4f781b5a067fc44159f3bc166c573c1a1bb" Jan 27 20:35:53 crc kubenswrapper[4858]: I0127 20:35:53.021200 4858 scope.go:117] "RemoveContainer" containerID="f36e73984958cfc9d6db231ecc55a91c7addac4daac8dcd6c320aa7606bd832b" Jan 27 20:35:53 crc kubenswrapper[4858]: I0127 20:35:53.066150 4858 scope.go:117] "RemoveContainer" containerID="4a2d3c0b69d2803c548a955731080f645cb9ddf696bba50c21cd6fa56a3d4f68" Jan 27 20:35:53 crc kubenswrapper[4858]: I0127 20:35:53.087853 4858 scope.go:117] "RemoveContainer" containerID="f94d3937a45c7d6a9e37a94bc78c9b78d4da0996ce1944d482f737900795b362" Jan 27 20:36:00 crc kubenswrapper[4858]: I0127 20:36:00.071228 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:36:00 crc kubenswrapper[4858]: E0127 20:36:00.072291 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:36:13 crc kubenswrapper[4858]: I0127 20:36:13.071038 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:36:13 crc kubenswrapper[4858]: E0127 20:36:13.071938 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:36:24 crc kubenswrapper[4858]: I0127 20:36:24.071099 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:36:24 crc kubenswrapper[4858]: E0127 20:36:24.072164 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:36:38 crc kubenswrapper[4858]: I0127 20:36:38.073509 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:36:38 crc kubenswrapper[4858]: E0127 20:36:38.075193 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:36:51 crc kubenswrapper[4858]: I0127 20:36:51.071430 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:36:51 crc kubenswrapper[4858]: E0127 20:36:51.072953 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:36:53 crc kubenswrapper[4858]: I0127 20:36:53.209931 4858 scope.go:117] "RemoveContainer" containerID="b4e13937d9f6123c3847e871437efbd5c11818b2ed3824299b82634dd6f9b0cb" Jan 27 20:36:53 crc kubenswrapper[4858]: I0127 20:36:53.243045 4858 scope.go:117] "RemoveContainer" containerID="2cf5f3ac8f311926f4caecec5cbe3beafca1edba62709a677207d7b3207c878a" Jan 27 20:37:06 crc kubenswrapper[4858]: I0127 20:37:06.082224 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:37:06 crc kubenswrapper[4858]: E0127 20:37:06.083236 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:37:17 crc kubenswrapper[4858]: I0127 20:37:17.071499 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:37:17 crc kubenswrapper[4858]: E0127 20:37:17.072185 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:37:29 crc kubenswrapper[4858]: I0127 20:37:29.071406 4858 scope.go:117] "RemoveContainer" 
containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:37:29 crc kubenswrapper[4858]: E0127 20:37:29.072854 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:37:40 crc kubenswrapper[4858]: I0127 20:37:40.071709 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:37:40 crc kubenswrapper[4858]: E0127 20:37:40.072725 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:37:55 crc kubenswrapper[4858]: I0127 20:37:55.072490 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:37:55 crc kubenswrapper[4858]: E0127 20:37:55.073424 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:38:07 crc kubenswrapper[4858]: I0127 20:38:07.072121 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:38:07 crc kubenswrapper[4858]: E0127 20:38:07.073354 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:38:12 crc kubenswrapper[4858]: I0127 20:38:12.049827 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-8625f"] Jan 27 20:38:12 crc kubenswrapper[4858]: I0127 20:38:12.066034 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-58c8-account-create-update-f2bwj"] Jan 27 20:38:12 crc kubenswrapper[4858]: I0127 20:38:12.092145 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-58c8-account-create-update-f2bwj"] Jan 27 20:38:12 crc kubenswrapper[4858]: I0127 20:38:12.092245 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-8625f"] Jan 27 20:38:13 crc kubenswrapper[4858]: I0127 20:38:13.033781 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3fe9-account-create-update-mvfhw"] Jan 27 20:38:13 crc kubenswrapper[4858]: I0127 20:38:13.044270 4858 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/keystone-3fe9-account-create-update-mvfhw"] Jan 27 20:38:14 crc kubenswrapper[4858]: I0127 20:38:14.055744 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-9080-account-create-update-j5tv4"] Jan 27 20:38:14 crc kubenswrapper[4858]: I0127 20:38:14.088965 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6049d236-ab47-40dd-845e-af928985d66b" path="/var/lib/kubelet/pods/6049d236-ab47-40dd-845e-af928985d66b/volumes" Jan 27 20:38:14 crc kubenswrapper[4858]: I0127 20:38:14.090473 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dbaa1db-74f0-45f8-9f44-e27ebff3e89c" path="/var/lib/kubelet/pods/9dbaa1db-74f0-45f8-9f44-e27ebff3e89c/volumes" Jan 27 20:38:14 crc kubenswrapper[4858]: I0127 20:38:14.092402 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed125e14-c1dd-4edc-bf84-2e95b94afc30" path="/var/lib/kubelet/pods/ed125e14-c1dd-4edc-bf84-2e95b94afc30/volumes" Jan 27 20:38:14 crc kubenswrapper[4858]: I0127 20:38:14.093584 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-9080-account-create-update-j5tv4"] Jan 27 20:38:15 crc kubenswrapper[4858]: I0127 20:38:15.042094 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-4bp6j"] Jan 27 20:38:15 crc kubenswrapper[4858]: I0127 20:38:15.055877 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-4bp6j"] Jan 27 20:38:16 crc kubenswrapper[4858]: I0127 20:38:16.097947 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ec23204-a373-4fab-80be-43c45596f7e0" path="/var/lib/kubelet/pods/1ec23204-a373-4fab-80be-43c45596f7e0/volumes" Jan 27 20:38:16 crc kubenswrapper[4858]: I0127 20:38:16.099345 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6772e04a-3e3e-427a-8e84-8979c1fe31af" path="/var/lib/kubelet/pods/6772e04a-3e3e-427a-8e84-8979c1fe31af/volumes" Jan 27 20:38:18 crc kubenswrapper[4858]: I0127 20:38:18.072213 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:38:18 crc kubenswrapper[4858]: E0127 20:38:18.072966 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:38:22 crc kubenswrapper[4858]: I0127 20:38:22.035852 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-h782w"] Jan 27 20:38:22 crc kubenswrapper[4858]: I0127 20:38:22.046532 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-h782w"] Jan 27 20:38:22 crc kubenswrapper[4858]: I0127 20:38:22.090569 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7c781eb-de63-45c1-b5b2-0496fe6f2d34" path="/var/lib/kubelet/pods/c7c781eb-de63-45c1-b5b2-0496fe6f2d34/volumes" Jan 27 20:38:31 crc kubenswrapper[4858]: I0127 20:38:31.049163 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gxl97"] Jan 27 20:38:31 crc kubenswrapper[4858]: I0127 20:38:31.066497 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/root-account-create-update-gxl97"] Jan 27 20:38:32 crc kubenswrapper[4858]: I0127 20:38:32.072425 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:38:32 crc kubenswrapper[4858]: I0127 20:38:32.086437 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="102cbac7-d601-47cf-b5d5-7279a3453669" path="/var/lib/kubelet/pods/102cbac7-d601-47cf-b5d5-7279a3453669/volumes" Jan 27 20:38:32 crc kubenswrapper[4858]: I0127 20:38:32.542229 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"336e4dbda5f2330cb97a3401d43a535416bd6411da7f0e5d5731c4398198a98c"} Jan 27 20:38:50 crc kubenswrapper[4858]: I0127 20:38:50.053143 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cc4d-account-create-update-d2jtl"] Jan 27 20:38:50 crc kubenswrapper[4858]: I0127 20:38:50.082339 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-hh25x"] Jan 27 20:38:50 crc kubenswrapper[4858]: I0127 20:38:50.087777 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-364b-account-create-update-csrfb"] Jan 27 20:38:50 crc kubenswrapper[4858]: I0127 20:38:50.096402 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-ch6lv"] Jan 27 20:38:50 crc kubenswrapper[4858]: I0127 20:38:50.105707 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-cc4d-account-create-update-d2jtl"] Jan 27 20:38:50 crc kubenswrapper[4858]: I0127 20:38:50.114944 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-ch6lv"] Jan 27 20:38:50 crc kubenswrapper[4858]: I0127 20:38:50.124215 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-hh25x"] Jan 27 20:38:50 crc kubenswrapper[4858]: I0127 20:38:50.133455 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-364b-account-create-update-csrfb"] Jan 27 20:38:52 crc kubenswrapper[4858]: I0127 20:38:52.085764 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06e7fc25-3533-452e-b1cd-1bb63cf92b60" path="/var/lib/kubelet/pods/06e7fc25-3533-452e-b1cd-1bb63cf92b60/volumes" Jan 27 20:38:52 crc kubenswrapper[4858]: I0127 20:38:52.087187 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab95708-4d18-4482-be83-b9f184b8a8f0" path="/var/lib/kubelet/pods/3ab95708-4d18-4482-be83-b9f184b8a8f0/volumes" Jan 27 20:38:52 crc kubenswrapper[4858]: I0127 20:38:52.087886 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dc3a612-b94e-44d8-a37f-7788316b9156" path="/var/lib/kubelet/pods/3dc3a612-b94e-44d8-a37f-7788316b9156/volumes" Jan 27 20:38:52 crc kubenswrapper[4858]: I0127 20:38:52.088612 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d33fccf1-31e1-48da-9f19-598e521a8357" path="/var/lib/kubelet/pods/d33fccf1-31e1-48da-9f19-598e521a8357/volumes" Jan 27 20:38:52 crc kubenswrapper[4858]: I0127 20:38:52.792649 4858 generic.go:334] "Generic (PLEG): container finished" podID="20ffe28d-a9df-4416-85b7-c501d7555431" containerID="5779e317d1ec86b3687ccc4dc2b019a69772cb06cc8e4a6f43627e843acd4771" exitCode=0 Jan 27 20:38:52 crc kubenswrapper[4858]: I0127 20:38:52.792703 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" event={"ID":"20ffe28d-a9df-4416-85b7-c501d7555431","Type":"ContainerDied","Data":"5779e317d1ec86b3687ccc4dc2b019a69772cb06cc8e4a6f43627e843acd4771"} Jan 27 20:38:53 crc kubenswrapper[4858]: I0127 20:38:53.342191 4858 scope.go:117] "RemoveContainer" containerID="fcd05f154137a5cd0cdef610c44dc0c175ad659c408b90d1885d43906b2bd0c3" Jan 27 20:38:53 crc kubenswrapper[4858]: I0127 20:38:53.370344 4858 scope.go:117] "RemoveContainer" containerID="caf128d862a6e275cd3e353b499ea58b115a8a5f6c85813ea7d4c1f407e1a290" Jan 27 20:38:53 crc kubenswrapper[4858]: I0127 20:38:53.418627 4858 scope.go:117] "RemoveContainer" containerID="8b1f356bfce80cd8a433abad4d931289ef5df2cd7837018a90d62ec7b4fe60e2" Jan 27 20:38:53 crc kubenswrapper[4858]: I0127 20:38:53.487601 4858 scope.go:117] "RemoveContainer" containerID="4d70e56c3d3a4cdcba438c5451d8e7c7f7192bb4f10bcfa81fec4e27c20095f0" Jan 27 20:38:53 crc kubenswrapper[4858]: I0127 20:38:53.528083 4858 scope.go:117] "RemoveContainer" containerID="2af5bd9f6018c3bdefdcd11a1fc5f5292aa9740fc308ac54e82ef39926abff28" Jan 27 20:38:53 crc kubenswrapper[4858]: I0127 20:38:53.617639 4858 scope.go:117] "RemoveContainer" containerID="4fc16d5d7be264ec15cff3b5a3064abb362e6b25e30ac9cc704b3e74b52ad512" Jan 27 20:38:53 crc kubenswrapper[4858]: I0127 20:38:53.708537 4858 scope.go:117] "RemoveContainer" containerID="040a3230385826025b96254b816b02b7948c029a8e16e3ad5475a5c352e86d94" Jan 27 20:38:53 crc kubenswrapper[4858]: I0127 20:38:53.732875 4858 scope.go:117] "RemoveContainer" containerID="e2f6744f033e6446a683b5bedbca7e1674c0929ebb4dbd684eed82f02443ade4" Jan 27 20:38:53 crc kubenswrapper[4858]: I0127 20:38:53.787037 4858 scope.go:117] "RemoveContainer" containerID="95ebc05229889acd0760c4cccbee16f1f7b7e72f5e52f81857016afb0b2d1b7c" Jan 27 20:38:53 crc kubenswrapper[4858]: I0127 20:38:53.832951 4858 scope.go:117] "RemoveContainer" containerID="e8f9b65ca82a4810c2f4128d4a5385a9ab7c32287bf42bcd935f06eeab4aa458" Jan 27 20:38:53 crc kubenswrapper[4858]: I0127 20:38:53.870603 4858 scope.go:117] "RemoveContainer" containerID="ae03d626878e812a7b64adf5ef65f4c86026f3ddcfcc8475153ce6b1ead9d97e" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.040635 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-x7vsq"] Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.054499 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-gjm29"] Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.066050 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-x7vsq"] Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.104959 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42982fed-6c85-4967-831f-3d1f7715fa5f" path="/var/lib/kubelet/pods/42982fed-6c85-4967-831f-3d1f7715fa5f/volumes" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.105673 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-374b-account-create-update-nmdgk"] Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.105701 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2a45-account-create-update-cv72r"] Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.105714 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-374b-account-create-update-nmdgk"] Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.110424 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-db-create-gjm29"] Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.118367 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2a45-account-create-update-cv72r"] Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.239076 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.288854 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-inventory\") pod \"20ffe28d-a9df-4416-85b7-c501d7555431\" (UID: \"20ffe28d-a9df-4416-85b7-c501d7555431\") " Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.289698 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-bootstrap-combined-ca-bundle\") pod \"20ffe28d-a9df-4416-85b7-c501d7555431\" (UID: \"20ffe28d-a9df-4416-85b7-c501d7555431\") " Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.289941 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-ssh-key-openstack-edpm-ipam\") pod \"20ffe28d-a9df-4416-85b7-c501d7555431\" (UID: \"20ffe28d-a9df-4416-85b7-c501d7555431\") " Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.289996 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26bz8\" (UniqueName: \"kubernetes.io/projected/20ffe28d-a9df-4416-85b7-c501d7555431-kube-api-access-26bz8\") pod \"20ffe28d-a9df-4416-85b7-c501d7555431\" (UID: \"20ffe28d-a9df-4416-85b7-c501d7555431\") " Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.294893 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "20ffe28d-a9df-4416-85b7-c501d7555431" (UID: "20ffe28d-a9df-4416-85b7-c501d7555431"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.295467 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ffe28d-a9df-4416-85b7-c501d7555431-kube-api-access-26bz8" (OuterVolumeSpecName: "kube-api-access-26bz8") pod "20ffe28d-a9df-4416-85b7-c501d7555431" (UID: "20ffe28d-a9df-4416-85b7-c501d7555431"). InnerVolumeSpecName "kube-api-access-26bz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.321688 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-inventory" (OuterVolumeSpecName: "inventory") pod "20ffe28d-a9df-4416-85b7-c501d7555431" (UID: "20ffe28d-a9df-4416-85b7-c501d7555431"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.322095 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "20ffe28d-a9df-4416-85b7-c501d7555431" (UID: "20ffe28d-a9df-4416-85b7-c501d7555431"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.393979 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26bz8\" (UniqueName: \"kubernetes.io/projected/20ffe28d-a9df-4416-85b7-c501d7555431-kube-api-access-26bz8\") on node \"crc\" DevicePath \"\"" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.394034 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.394048 4858 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.394063 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/20ffe28d-a9df-4416-85b7-c501d7555431-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.869988 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" event={"ID":"20ffe28d-a9df-4416-85b7-c501d7555431","Type":"ContainerDied","Data":"3078668ca093f4c5b8c5934f6050b28267840e4a2e3dae2bf6e116fd5067cd36"} Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.870374 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3078668ca093f4c5b8c5934f6050b28267840e4a2e3dae2bf6e116fd5067cd36" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.870459 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.938037 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt"] Jan 27 20:38:54 crc kubenswrapper[4858]: E0127 20:38:54.938633 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20ffe28d-a9df-4416-85b7-c501d7555431" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.938666 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="20ffe28d-a9df-4416-85b7-c501d7555431" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 20:38:54 crc kubenswrapper[4858]: E0127 20:38:54.938702 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce09890-416b-4b69-87f8-5a695f3c2ce8" containerName="extract-utilities" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.938711 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce09890-416b-4b69-87f8-5a695f3c2ce8" containerName="extract-utilities" Jan 27 20:38:54 crc kubenswrapper[4858]: E0127 20:38:54.938734 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce09890-416b-4b69-87f8-5a695f3c2ce8" containerName="extract-content" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.938742 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce09890-416b-4b69-87f8-5a695f3c2ce8" containerName="extract-content" Jan 27 20:38:54 crc kubenswrapper[4858]: E0127 20:38:54.938760 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce09890-416b-4b69-87f8-5a695f3c2ce8" containerName="registry-server" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.938768 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce09890-416b-4b69-87f8-5a695f3c2ce8" containerName="registry-server" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.939008 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="20ffe28d-a9df-4416-85b7-c501d7555431" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.939026 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce09890-416b-4b69-87f8-5a695f3c2ce8" containerName="registry-server" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.939952 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.944490 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.944838 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4x4qb" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.944926 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.945200 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 20:38:54 crc kubenswrapper[4858]: I0127 20:38:54.965012 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt"] Jan 27 20:38:55 crc kubenswrapper[4858]: I0127 20:38:55.007243 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d59ffd9a-001c-400a-b79b-4617489956ed-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt\" (UID: \"d59ffd9a-001c-400a-b79b-4617489956ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" Jan 27 20:38:55 crc kubenswrapper[4858]: I0127 20:38:55.007802 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d59ffd9a-001c-400a-b79b-4617489956ed-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt\" (UID: \"d59ffd9a-001c-400a-b79b-4617489956ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" Jan 27 20:38:55 crc kubenswrapper[4858]: I0127 20:38:55.008044 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m49b\" (UniqueName: \"kubernetes.io/projected/d59ffd9a-001c-400a-b79b-4617489956ed-kube-api-access-7m49b\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt\" (UID: \"d59ffd9a-001c-400a-b79b-4617489956ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" Jan 27 20:38:55 crc kubenswrapper[4858]: I0127 20:38:55.110511 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d59ffd9a-001c-400a-b79b-4617489956ed-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt\" (UID: \"d59ffd9a-001c-400a-b79b-4617489956ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" Jan 27 20:38:55 crc kubenswrapper[4858]: I0127 20:38:55.110714 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m49b\" (UniqueName: \"kubernetes.io/projected/d59ffd9a-001c-400a-b79b-4617489956ed-kube-api-access-7m49b\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt\" (UID: \"d59ffd9a-001c-400a-b79b-4617489956ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" Jan 27 20:38:55 crc kubenswrapper[4858]: I0127 20:38:55.110762 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/d59ffd9a-001c-400a-b79b-4617489956ed-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt\" (UID: \"d59ffd9a-001c-400a-b79b-4617489956ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" Jan 27 20:38:55 crc kubenswrapper[4858]: I0127 20:38:55.116244 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d59ffd9a-001c-400a-b79b-4617489956ed-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt\" (UID: \"d59ffd9a-001c-400a-b79b-4617489956ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" Jan 27 20:38:55 crc kubenswrapper[4858]: I0127 20:38:55.129459 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d59ffd9a-001c-400a-b79b-4617489956ed-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt\" (UID: \"d59ffd9a-001c-400a-b79b-4617489956ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" Jan 27 20:38:55 crc kubenswrapper[4858]: I0127 20:38:55.131259 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m49b\" (UniqueName: \"kubernetes.io/projected/d59ffd9a-001c-400a-b79b-4617489956ed-kube-api-access-7m49b\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt\" (UID: \"d59ffd9a-001c-400a-b79b-4617489956ed\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" Jan 27 20:38:55 crc kubenswrapper[4858]: I0127 20:38:55.269887 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" Jan 27 20:38:55 crc kubenswrapper[4858]: I0127 20:38:55.807960 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt"] Jan 27 20:38:55 crc kubenswrapper[4858]: I0127 20:38:55.809087 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 20:38:55 crc kubenswrapper[4858]: I0127 20:38:55.882601 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" event={"ID":"d59ffd9a-001c-400a-b79b-4617489956ed","Type":"ContainerStarted","Data":"db635799032754e074c9de369e68c233e93cb25a85bc34d976e40ca1c68c2d40"} Jan 27 20:38:56 crc kubenswrapper[4858]: I0127 20:38:56.091298 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3" path="/var/lib/kubelet/pods/1d7a7cae-d1ec-4532-95ac-9a6d35dda6b3/volumes" Jan 27 20:38:56 crc kubenswrapper[4858]: I0127 20:38:56.092510 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2596f4b-7e20-4979-be17-e263bc5949c0" path="/var/lib/kubelet/pods/b2596f4b-7e20-4979-be17-e263bc5949c0/volumes" Jan 27 20:38:56 crc kubenswrapper[4858]: I0127 20:38:56.093250 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a9d6ee-6747-442f-a1d8-222920bef47e" path="/var/lib/kubelet/pods/f9a9d6ee-6747-442f-a1d8-222920bef47e/volumes" Jan 27 20:38:57 crc kubenswrapper[4858]: I0127 20:38:57.906854 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" 
event={"ID":"d59ffd9a-001c-400a-b79b-4617489956ed","Type":"ContainerStarted","Data":"4302e8c8e5910090099f7027c859b10c1b7381a7b11e61dd4b261ede33f706b3"} Jan 27 20:38:57 crc kubenswrapper[4858]: I0127 20:38:57.938842 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" podStartSLOduration=2.956964439 podStartE2EDuration="3.938811137s" podCreationTimestamp="2026-01-27 20:38:54 +0000 UTC" firstStartedPulling="2026-01-27 20:38:55.808817624 +0000 UTC m=+1880.516633330" lastFinishedPulling="2026-01-27 20:38:56.790664322 +0000 UTC m=+1881.498480028" observedRunningTime="2026-01-27 20:38:57.927608302 +0000 UTC m=+1882.635424008" watchObservedRunningTime="2026-01-27 20:38:57.938811137 +0000 UTC m=+1882.646626843" Jan 27 20:39:12 crc kubenswrapper[4858]: I0127 20:39:12.057934 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-nwnrz"] Jan 27 20:39:12 crc kubenswrapper[4858]: I0127 20:39:12.084242 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-nwnrz"] Jan 27 20:39:12 crc kubenswrapper[4858]: I0127 20:39:12.085978 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-5fzmw"] Jan 27 20:39:12 crc kubenswrapper[4858]: I0127 20:39:12.096217 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-5fzmw"] Jan 27 20:39:14 crc kubenswrapper[4858]: I0127 20:39:14.092534 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d3e3875-21c2-42e2-ba9b-ed981baab427" path="/var/lib/kubelet/pods/0d3e3875-21c2-42e2-ba9b-ed981baab427/volumes" Jan 27 20:39:14 crc kubenswrapper[4858]: I0127 20:39:14.096120 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e49e861-a431-4a8f-8864-9672d699d9a0" path="/var/lib/kubelet/pods/7e49e861-a431-4a8f-8864-9672d699d9a0/volumes" Jan 27 20:39:43 crc kubenswrapper[4858]: I0127 20:39:43.758792 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-574fc98977-sp7zp" podUID="57e04641-598d-459b-9996-0ae4182ae4fb" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 27 20:39:44 crc kubenswrapper[4858]: I0127 20:39:44.050498 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kr2j4"] Jan 27 20:39:44 crc kubenswrapper[4858]: I0127 20:39:44.102280 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kr2j4"] Jan 27 20:39:45 crc kubenswrapper[4858]: I0127 20:39:45.065013 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h8tkk"] Jan 27 20:39:45 crc kubenswrapper[4858]: I0127 20:39:45.069023 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8tkk" Jan 27 20:39:45 crc kubenswrapper[4858]: I0127 20:39:45.079512 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8tkk"] Jan 27 20:39:45 crc kubenswrapper[4858]: I0127 20:39:45.262503 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acc1224-6517-4394-b1c7-7da935017a74-catalog-content\") pod \"redhat-marketplace-h8tkk\" (UID: \"7acc1224-6517-4394-b1c7-7da935017a74\") " pod="openshift-marketplace/redhat-marketplace-h8tkk" Jan 27 20:39:45 crc kubenswrapper[4858]: I0127 20:39:45.262595 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acc1224-6517-4394-b1c7-7da935017a74-utilities\") pod \"redhat-marketplace-h8tkk\" (UID: \"7acc1224-6517-4394-b1c7-7da935017a74\") " pod="openshift-marketplace/redhat-marketplace-h8tkk" Jan 27 20:39:45 crc kubenswrapper[4858]: I0127 20:39:45.262622 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gplbx\" (UniqueName: \"kubernetes.io/projected/7acc1224-6517-4394-b1c7-7da935017a74-kube-api-access-gplbx\") pod \"redhat-marketplace-h8tkk\" (UID: \"7acc1224-6517-4394-b1c7-7da935017a74\") " pod="openshift-marketplace/redhat-marketplace-h8tkk" Jan 27 20:39:45 crc kubenswrapper[4858]: I0127 20:39:45.365486 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acc1224-6517-4394-b1c7-7da935017a74-catalog-content\") pod \"redhat-marketplace-h8tkk\" (UID: \"7acc1224-6517-4394-b1c7-7da935017a74\") " pod="openshift-marketplace/redhat-marketplace-h8tkk" Jan 27 20:39:45 crc kubenswrapper[4858]: I0127 20:39:45.365583 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acc1224-6517-4394-b1c7-7da935017a74-utilities\") pod \"redhat-marketplace-h8tkk\" (UID: \"7acc1224-6517-4394-b1c7-7da935017a74\") " pod="openshift-marketplace/redhat-marketplace-h8tkk" Jan 27 20:39:45 crc kubenswrapper[4858]: I0127 20:39:45.365617 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gplbx\" (UniqueName: \"kubernetes.io/projected/7acc1224-6517-4394-b1c7-7da935017a74-kube-api-access-gplbx\") pod \"redhat-marketplace-h8tkk\" (UID: \"7acc1224-6517-4394-b1c7-7da935017a74\") " pod="openshift-marketplace/redhat-marketplace-h8tkk" Jan 27 20:39:45 crc kubenswrapper[4858]: I0127 20:39:45.366079 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acc1224-6517-4394-b1c7-7da935017a74-catalog-content\") pod \"redhat-marketplace-h8tkk\" (UID: \"7acc1224-6517-4394-b1c7-7da935017a74\") " pod="openshift-marketplace/redhat-marketplace-h8tkk" Jan 27 20:39:45 crc kubenswrapper[4858]: I0127 20:39:45.366335 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acc1224-6517-4394-b1c7-7da935017a74-utilities\") pod \"redhat-marketplace-h8tkk\" (UID: \"7acc1224-6517-4394-b1c7-7da935017a74\") " pod="openshift-marketplace/redhat-marketplace-h8tkk" Jan 27 20:39:45 crc kubenswrapper[4858]: I0127 20:39:45.386941 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gplbx\" (UniqueName: \"kubernetes.io/projected/7acc1224-6517-4394-b1c7-7da935017a74-kube-api-access-gplbx\") pod \"redhat-marketplace-h8tkk\" (UID: \"7acc1224-6517-4394-b1c7-7da935017a74\") " pod="openshift-marketplace/redhat-marketplace-h8tkk" Jan 27 20:39:45 crc kubenswrapper[4858]: I0127 20:39:45.403143 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8tkk" Jan 27 20:39:45 crc kubenswrapper[4858]: I0127 20:39:45.961433 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8tkk"] Jan 27 20:39:46 crc kubenswrapper[4858]: I0127 20:39:46.084026 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e04ce574-5470-43ae-8207-fb01bd98805f" path="/var/lib/kubelet/pods/e04ce574-5470-43ae-8207-fb01bd98805f/volumes" Jan 27 20:39:46 crc kubenswrapper[4858]: I0127 20:39:46.463347 4858 generic.go:334] "Generic (PLEG): container finished" podID="7acc1224-6517-4394-b1c7-7da935017a74" containerID="a918350e11ac9ffa93b4d60e1ae8ea2fd9864355975b1bd0576e2662220970c8" exitCode=0 Jan 27 20:39:46 crc kubenswrapper[4858]: I0127 20:39:46.463455 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8tkk" event={"ID":"7acc1224-6517-4394-b1c7-7da935017a74","Type":"ContainerDied","Data":"a918350e11ac9ffa93b4d60e1ae8ea2fd9864355975b1bd0576e2662220970c8"} Jan 27 20:39:46 crc kubenswrapper[4858]: I0127 20:39:46.463771 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8tkk" event={"ID":"7acc1224-6517-4394-b1c7-7da935017a74","Type":"ContainerStarted","Data":"970f5d9a7245cb9ef1417756f474939057ecccacce6390b8ed27f10b97f6675b"} Jan 27 20:39:47 crc kubenswrapper[4858]: I0127 20:39:47.294944 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w9fh8"] Jan 27 20:39:47 crc kubenswrapper[4858]: I0127 20:39:47.325752 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w9fh8" Jan 27 20:39:47 crc kubenswrapper[4858]: I0127 20:39:47.329398 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w9fh8"] Jan 27 20:39:47 crc kubenswrapper[4858]: I0127 20:39:47.477745 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8tkk" event={"ID":"7acc1224-6517-4394-b1c7-7da935017a74","Type":"ContainerStarted","Data":"74c37d1f452855c4e2ad6bfd2ed01d69d06050d1ca2ba884cb096a039bef8b44"} Jan 27 20:39:47 crc kubenswrapper[4858]: I0127 20:39:47.516431 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zznbt\" (UniqueName: \"kubernetes.io/projected/9ed492f2-4fda-47a1-91a2-3586df3dd04b-kube-api-access-zznbt\") pod \"community-operators-w9fh8\" (UID: \"9ed492f2-4fda-47a1-91a2-3586df3dd04b\") " pod="openshift-marketplace/community-operators-w9fh8" Jan 27 20:39:47 crc kubenswrapper[4858]: I0127 20:39:47.516484 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed492f2-4fda-47a1-91a2-3586df3dd04b-utilities\") pod \"community-operators-w9fh8\" (UID: \"9ed492f2-4fda-47a1-91a2-3586df3dd04b\") " pod="openshift-marketplace/community-operators-w9fh8" Jan 27 20:39:47 crc kubenswrapper[4858]: I0127 20:39:47.516595 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed492f2-4fda-47a1-91a2-3586df3dd04b-catalog-content\") pod \"community-operators-w9fh8\" (UID: \"9ed492f2-4fda-47a1-91a2-3586df3dd04b\") " pod="openshift-marketplace/community-operators-w9fh8" Jan 27 20:39:47 crc kubenswrapper[4858]: I0127 20:39:47.618156 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed492f2-4fda-47a1-91a2-3586df3dd04b-utilities\") pod \"community-operators-w9fh8\" (UID: \"9ed492f2-4fda-47a1-91a2-3586df3dd04b\") " pod="openshift-marketplace/community-operators-w9fh8" Jan 27 20:39:47 crc kubenswrapper[4858]: I0127 20:39:47.618274 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed492f2-4fda-47a1-91a2-3586df3dd04b-catalog-content\") pod \"community-operators-w9fh8\" (UID: \"9ed492f2-4fda-47a1-91a2-3586df3dd04b\") " pod="openshift-marketplace/community-operators-w9fh8" Jan 27 20:39:47 crc kubenswrapper[4858]: I0127 20:39:47.618431 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zznbt\" (UniqueName: \"kubernetes.io/projected/9ed492f2-4fda-47a1-91a2-3586df3dd04b-kube-api-access-zznbt\") pod \"community-operators-w9fh8\" (UID: \"9ed492f2-4fda-47a1-91a2-3586df3dd04b\") " pod="openshift-marketplace/community-operators-w9fh8" Jan 27 20:39:47 crc kubenswrapper[4858]: I0127 20:39:47.618724 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed492f2-4fda-47a1-91a2-3586df3dd04b-utilities\") pod \"community-operators-w9fh8\" (UID: \"9ed492f2-4fda-47a1-91a2-3586df3dd04b\") " pod="openshift-marketplace/community-operators-w9fh8" Jan 27 20:39:47 crc kubenswrapper[4858]: I0127 20:39:47.619030 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/9ed492f2-4fda-47a1-91a2-3586df3dd04b-catalog-content\") pod \"community-operators-w9fh8\" (UID: \"9ed492f2-4fda-47a1-91a2-3586df3dd04b\") " pod="openshift-marketplace/community-operators-w9fh8" Jan 27 20:39:47 crc kubenswrapper[4858]: I0127 20:39:47.651844 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zznbt\" (UniqueName: \"kubernetes.io/projected/9ed492f2-4fda-47a1-91a2-3586df3dd04b-kube-api-access-zznbt\") pod \"community-operators-w9fh8\" (UID: \"9ed492f2-4fda-47a1-91a2-3586df3dd04b\") " pod="openshift-marketplace/community-operators-w9fh8" Jan 27 20:39:47 crc kubenswrapper[4858]: I0127 20:39:47.683218 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w9fh8" Jan 27 20:39:48 crc kubenswrapper[4858]: I0127 20:39:48.329371 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w9fh8"] Jan 27 20:39:48 crc kubenswrapper[4858]: W0127 20:39:48.329819 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ed492f2_4fda_47a1_91a2_3586df3dd04b.slice/crio-6c9aebf5d7fd7d10271798818ebeb8b0eb4decf76ee50a82e7da6981048e154a WatchSource:0}: Error finding container 6c9aebf5d7fd7d10271798818ebeb8b0eb4decf76ee50a82e7da6981048e154a: Status 404 returned error can't find the container with id 6c9aebf5d7fd7d10271798818ebeb8b0eb4decf76ee50a82e7da6981048e154a Jan 27 20:39:48 crc kubenswrapper[4858]: I0127 20:39:48.489696 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9fh8" event={"ID":"9ed492f2-4fda-47a1-91a2-3586df3dd04b","Type":"ContainerStarted","Data":"6c9aebf5d7fd7d10271798818ebeb8b0eb4decf76ee50a82e7da6981048e154a"} Jan 27 20:39:48 crc kubenswrapper[4858]: I0127 20:39:48.492353 4858 generic.go:334] "Generic (PLEG): container finished" podID="7acc1224-6517-4394-b1c7-7da935017a74" containerID="74c37d1f452855c4e2ad6bfd2ed01d69d06050d1ca2ba884cb096a039bef8b44" exitCode=0 Jan 27 20:39:48 crc kubenswrapper[4858]: I0127 20:39:48.492459 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8tkk" event={"ID":"7acc1224-6517-4394-b1c7-7da935017a74","Type":"ContainerDied","Data":"74c37d1f452855c4e2ad6bfd2ed01d69d06050d1ca2ba884cb096a039bef8b44"} Jan 27 20:39:49 crc kubenswrapper[4858]: I0127 20:39:49.522373 4858 generic.go:334] "Generic (PLEG): container finished" podID="9ed492f2-4fda-47a1-91a2-3586df3dd04b" containerID="fe7e12807238eedd1ca36829b17c6fab88525efe7f228258c903ae6eae360b55" exitCode=0 Jan 27 20:39:49 crc kubenswrapper[4858]: I0127 20:39:49.522480 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9fh8" event={"ID":"9ed492f2-4fda-47a1-91a2-3586df3dd04b","Type":"ContainerDied","Data":"fe7e12807238eedd1ca36829b17c6fab88525efe7f228258c903ae6eae360b55"} Jan 27 20:39:49 crc kubenswrapper[4858]: I0127 20:39:49.537066 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8tkk" event={"ID":"7acc1224-6517-4394-b1c7-7da935017a74","Type":"ContainerStarted","Data":"f2ccc844dd4c0e20cb5fdbd3359af8a0bd12818090990844062a48d1c067902b"} Jan 27 20:39:49 crc kubenswrapper[4858]: I0127 20:39:49.578066 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h8tkk" 
podStartSLOduration=2.120161303 podStartE2EDuration="4.578040588s" podCreationTimestamp="2026-01-27 20:39:45 +0000 UTC" firstStartedPulling="2026-01-27 20:39:46.465020395 +0000 UTC m=+1931.172836101" lastFinishedPulling="2026-01-27 20:39:48.92289964 +0000 UTC m=+1933.630715386" observedRunningTime="2026-01-27 20:39:49.575776455 +0000 UTC m=+1934.283592161" watchObservedRunningTime="2026-01-27 20:39:49.578040588 +0000 UTC m=+1934.285856294" Jan 27 20:39:50 crc kubenswrapper[4858]: I0127 20:39:50.548750 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9fh8" event={"ID":"9ed492f2-4fda-47a1-91a2-3586df3dd04b","Type":"ContainerStarted","Data":"ea5404c3992031bce74056cb6e6bf28d89defa5b5d9643d29a55478aee69c787"} Jan 27 20:39:51 crc kubenswrapper[4858]: I0127 20:39:51.563282 4858 generic.go:334] "Generic (PLEG): container finished" podID="9ed492f2-4fda-47a1-91a2-3586df3dd04b" containerID="ea5404c3992031bce74056cb6e6bf28d89defa5b5d9643d29a55478aee69c787" exitCode=0 Jan 27 20:39:51 crc kubenswrapper[4858]: I0127 20:39:51.563344 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9fh8" event={"ID":"9ed492f2-4fda-47a1-91a2-3586df3dd04b","Type":"ContainerDied","Data":"ea5404c3992031bce74056cb6e6bf28d89defa5b5d9643d29a55478aee69c787"} Jan 27 20:39:52 crc kubenswrapper[4858]: I0127 20:39:52.031708 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-jlwch"] Jan 27 20:39:52 crc kubenswrapper[4858]: I0127 20:39:52.042826 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-jlwch"] Jan 27 20:39:52 crc kubenswrapper[4858]: I0127 20:39:52.083278 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="734f1877-8907-44ff-b8af-c1a5f1b1395d" path="/var/lib/kubelet/pods/734f1877-8907-44ff-b8af-c1a5f1b1395d/volumes" Jan 27 20:39:52 crc kubenswrapper[4858]: I0127 20:39:52.596143 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9fh8" event={"ID":"9ed492f2-4fda-47a1-91a2-3586df3dd04b","Type":"ContainerStarted","Data":"fd4d4df3a8df7477ff72da32e0ca79547912ef6fa97230f58ba0341439e5a322"} Jan 27 20:39:52 crc kubenswrapper[4858]: I0127 20:39:52.630313 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w9fh8" podStartSLOduration=2.889590294 podStartE2EDuration="5.630275337s" podCreationTimestamp="2026-01-27 20:39:47 +0000 UTC" firstStartedPulling="2026-01-27 20:39:49.524709612 +0000 UTC m=+1934.232525328" lastFinishedPulling="2026-01-27 20:39:52.265394665 +0000 UTC m=+1936.973210371" observedRunningTime="2026-01-27 20:39:52.620741889 +0000 UTC m=+1937.328557615" watchObservedRunningTime="2026-01-27 20:39:52.630275337 +0000 UTC m=+1937.338091033" Jan 27 20:39:54 crc kubenswrapper[4858]: I0127 20:39:54.060778 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ltx6z"] Jan 27 20:39:54 crc kubenswrapper[4858]: I0127 20:39:54.108993 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ltx6z"] Jan 27 20:39:54 crc kubenswrapper[4858]: I0127 20:39:54.238248 4858 scope.go:117] "RemoveContainer" containerID="fac4154d4462e64acb42205823e1c5870f70bcdc77728c1f41d181a934b9b634" Jan 27 20:39:54 crc kubenswrapper[4858]: I0127 20:39:54.320002 4858 scope.go:117] "RemoveContainer" containerID="8c6f297041903903e71142d5906feb61449bc030914e1952791d9034cf5285ef" Jan 27 
20:39:54 crc kubenswrapper[4858]: I0127 20:39:54.404267 4858 scope.go:117] "RemoveContainer" containerID="7bea31aa778d39822f81cd4a51e08e662f3c7eadbf65b96acd6fc50dde7ef951" Jan 27 20:39:54 crc kubenswrapper[4858]: I0127 20:39:54.466890 4858 scope.go:117] "RemoveContainer" containerID="57d3595fa18d7acd981296d10cd97feb9df3ad70d3e95e9fbff8aa547bcd1155" Jan 27 20:39:54 crc kubenswrapper[4858]: I0127 20:39:54.525998 4858 scope.go:117] "RemoveContainer" containerID="4ce9da19d3b4a24b22a6cbac46ac50f7eb358a3795641085303a4e9443f58bd5" Jan 27 20:39:54 crc kubenswrapper[4858]: I0127 20:39:54.616965 4858 scope.go:117] "RemoveContainer" containerID="331471801af9273ef2d4ae91fd9140271c70e1c3fa51cc85bd69670ec96a4e1c" Jan 27 20:39:54 crc kubenswrapper[4858]: I0127 20:39:54.656801 4858 scope.go:117] "RemoveContainer" containerID="e0a00ff742d3e6d9379a6633640d3cab5ae790117688d92bbce7853bda95f6ef" Jan 27 20:39:54 crc kubenswrapper[4858]: I0127 20:39:54.685900 4858 scope.go:117] "RemoveContainer" containerID="3ffa5f2a89f0c71aede6a2ac0b7aacedee2d339bc4b07dcd4315f58570ccf22b" Jan 27 20:39:55 crc kubenswrapper[4858]: I0127 20:39:55.403283 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h8tkk" Jan 27 20:39:55 crc kubenswrapper[4858]: I0127 20:39:55.404428 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h8tkk" Jan 27 20:39:55 crc kubenswrapper[4858]: I0127 20:39:55.462031 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h8tkk" Jan 27 20:39:55 crc kubenswrapper[4858]: I0127 20:39:55.709834 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h8tkk" Jan 27 20:39:56 crc kubenswrapper[4858]: I0127 20:39:56.083247 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dae3012-914a-4fdc-81b0-23dc98627b05" path="/var/lib/kubelet/pods/8dae3012-914a-4fdc-81b0-23dc98627b05/volumes" Jan 27 20:39:56 crc kubenswrapper[4858]: I0127 20:39:56.253435 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8tkk"] Jan 27 20:39:57 crc kubenswrapper[4858]: I0127 20:39:57.675838 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h8tkk" podUID="7acc1224-6517-4394-b1c7-7da935017a74" containerName="registry-server" containerID="cri-o://f2ccc844dd4c0e20cb5fdbd3359af8a0bd12818090990844062a48d1c067902b" gracePeriod=2 Jan 27 20:39:57 crc kubenswrapper[4858]: I0127 20:39:57.683329 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w9fh8" Jan 27 20:39:57 crc kubenswrapper[4858]: I0127 20:39:57.683536 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w9fh8" Jan 27 20:39:57 crc kubenswrapper[4858]: I0127 20:39:57.749133 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w9fh8" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.122219 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8tkk" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.265520 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gplbx\" (UniqueName: \"kubernetes.io/projected/7acc1224-6517-4394-b1c7-7da935017a74-kube-api-access-gplbx\") pod \"7acc1224-6517-4394-b1c7-7da935017a74\" (UID: \"7acc1224-6517-4394-b1c7-7da935017a74\") " Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.265853 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acc1224-6517-4394-b1c7-7da935017a74-catalog-content\") pod \"7acc1224-6517-4394-b1c7-7da935017a74\" (UID: \"7acc1224-6517-4394-b1c7-7da935017a74\") " Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.265913 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acc1224-6517-4394-b1c7-7da935017a74-utilities\") pod \"7acc1224-6517-4394-b1c7-7da935017a74\" (UID: \"7acc1224-6517-4394-b1c7-7da935017a74\") " Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.267050 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7acc1224-6517-4394-b1c7-7da935017a74-utilities" (OuterVolumeSpecName: "utilities") pod "7acc1224-6517-4394-b1c7-7da935017a74" (UID: "7acc1224-6517-4394-b1c7-7da935017a74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.268147 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acc1224-6517-4394-b1c7-7da935017a74-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.275889 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7acc1224-6517-4394-b1c7-7da935017a74-kube-api-access-gplbx" (OuterVolumeSpecName: "kube-api-access-gplbx") pod "7acc1224-6517-4394-b1c7-7da935017a74" (UID: "7acc1224-6517-4394-b1c7-7da935017a74"). InnerVolumeSpecName "kube-api-access-gplbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.362769 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7acc1224-6517-4394-b1c7-7da935017a74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7acc1224-6517-4394-b1c7-7da935017a74" (UID: "7acc1224-6517-4394-b1c7-7da935017a74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.372403 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gplbx\" (UniqueName: \"kubernetes.io/projected/7acc1224-6517-4394-b1c7-7da935017a74-kube-api-access-gplbx\") on node \"crc\" DevicePath \"\"" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.372463 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acc1224-6517-4394-b1c7-7da935017a74-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.688395 4858 generic.go:334] "Generic (PLEG): container finished" podID="7acc1224-6517-4394-b1c7-7da935017a74" containerID="f2ccc844dd4c0e20cb5fdbd3359af8a0bd12818090990844062a48d1c067902b" exitCode=0 Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.688455 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h8tkk" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.688455 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8tkk" event={"ID":"7acc1224-6517-4394-b1c7-7da935017a74","Type":"ContainerDied","Data":"f2ccc844dd4c0e20cb5fdbd3359af8a0bd12818090990844062a48d1c067902b"} Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.688505 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h8tkk" event={"ID":"7acc1224-6517-4394-b1c7-7da935017a74","Type":"ContainerDied","Data":"970f5d9a7245cb9ef1417756f474939057ecccacce6390b8ed27f10b97f6675b"} Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.688529 4858 scope.go:117] "RemoveContainer" containerID="f2ccc844dd4c0e20cb5fdbd3359af8a0bd12818090990844062a48d1c067902b" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.729274 4858 scope.go:117] "RemoveContainer" containerID="74c37d1f452855c4e2ad6bfd2ed01d69d06050d1ca2ba884cb096a039bef8b44" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.734777 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8tkk"] Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.741973 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h8tkk"] Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.754896 4858 scope.go:117] "RemoveContainer" containerID="a918350e11ac9ffa93b4d60e1ae8ea2fd9864355975b1bd0576e2662220970c8" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.762222 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w9fh8" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.812826 4858 scope.go:117] "RemoveContainer" containerID="f2ccc844dd4c0e20cb5fdbd3359af8a0bd12818090990844062a48d1c067902b" Jan 27 20:39:58 crc kubenswrapper[4858]: E0127 20:39:58.813349 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2ccc844dd4c0e20cb5fdbd3359af8a0bd12818090990844062a48d1c067902b\": container with ID starting with f2ccc844dd4c0e20cb5fdbd3359af8a0bd12818090990844062a48d1c067902b not found: ID does not exist" containerID="f2ccc844dd4c0e20cb5fdbd3359af8a0bd12818090990844062a48d1c067902b" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.813383 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f2ccc844dd4c0e20cb5fdbd3359af8a0bd12818090990844062a48d1c067902b"} err="failed to get container status \"f2ccc844dd4c0e20cb5fdbd3359af8a0bd12818090990844062a48d1c067902b\": rpc error: code = NotFound desc = could not find container \"f2ccc844dd4c0e20cb5fdbd3359af8a0bd12818090990844062a48d1c067902b\": container with ID starting with f2ccc844dd4c0e20cb5fdbd3359af8a0bd12818090990844062a48d1c067902b not found: ID does not exist" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.813409 4858 scope.go:117] "RemoveContainer" containerID="74c37d1f452855c4e2ad6bfd2ed01d69d06050d1ca2ba884cb096a039bef8b44" Jan 27 20:39:58 crc kubenswrapper[4858]: E0127 20:39:58.813770 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74c37d1f452855c4e2ad6bfd2ed01d69d06050d1ca2ba884cb096a039bef8b44\": container with ID starting with 74c37d1f452855c4e2ad6bfd2ed01d69d06050d1ca2ba884cb096a039bef8b44 not found: ID does not exist" containerID="74c37d1f452855c4e2ad6bfd2ed01d69d06050d1ca2ba884cb096a039bef8b44" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.813792 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c37d1f452855c4e2ad6bfd2ed01d69d06050d1ca2ba884cb096a039bef8b44"} err="failed to get container status \"74c37d1f452855c4e2ad6bfd2ed01d69d06050d1ca2ba884cb096a039bef8b44\": rpc error: code = NotFound desc = could not find container \"74c37d1f452855c4e2ad6bfd2ed01d69d06050d1ca2ba884cb096a039bef8b44\": container with ID starting with 74c37d1f452855c4e2ad6bfd2ed01d69d06050d1ca2ba884cb096a039bef8b44 not found: ID does not exist" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.813806 4858 scope.go:117] "RemoveContainer" containerID="a918350e11ac9ffa93b4d60e1ae8ea2fd9864355975b1bd0576e2662220970c8" Jan 27 20:39:58 crc kubenswrapper[4858]: E0127 20:39:58.814111 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a918350e11ac9ffa93b4d60e1ae8ea2fd9864355975b1bd0576e2662220970c8\": container with ID starting with a918350e11ac9ffa93b4d60e1ae8ea2fd9864355975b1bd0576e2662220970c8 not found: ID does not exist" containerID="a918350e11ac9ffa93b4d60e1ae8ea2fd9864355975b1bd0576e2662220970c8" Jan 27 20:39:58 crc kubenswrapper[4858]: I0127 20:39:58.814136 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a918350e11ac9ffa93b4d60e1ae8ea2fd9864355975b1bd0576e2662220970c8"} err="failed to get container status \"a918350e11ac9ffa93b4d60e1ae8ea2fd9864355975b1bd0576e2662220970c8\": rpc error: code = NotFound desc = could not find container \"a918350e11ac9ffa93b4d60e1ae8ea2fd9864355975b1bd0576e2662220970c8\": container with ID starting with a918350e11ac9ffa93b4d60e1ae8ea2fd9864355975b1bd0576e2662220970c8 not found: ID does not exist" Jan 27 20:40:00 crc kubenswrapper[4858]: I0127 20:40:00.086542 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7acc1224-6517-4394-b1c7-7da935017a74" path="/var/lib/kubelet/pods/7acc1224-6517-4394-b1c7-7da935017a74/volumes" Jan 27 20:40:01 crc kubenswrapper[4858]: I0127 20:40:01.485768 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w9fh8"] Jan 27 20:40:02 crc kubenswrapper[4858]: I0127 20:40:02.475148 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w9fh8" 
podUID="9ed492f2-4fda-47a1-91a2-3586df3dd04b" containerName="registry-server" containerID="cri-o://fd4d4df3a8df7477ff72da32e0ca79547912ef6fa97230f58ba0341439e5a322" gracePeriod=2 Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.004775 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w9fh8" Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.095748 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zznbt\" (UniqueName: \"kubernetes.io/projected/9ed492f2-4fda-47a1-91a2-3586df3dd04b-kube-api-access-zznbt\") pod \"9ed492f2-4fda-47a1-91a2-3586df3dd04b\" (UID: \"9ed492f2-4fda-47a1-91a2-3586df3dd04b\") " Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.096044 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed492f2-4fda-47a1-91a2-3586df3dd04b-catalog-content\") pod \"9ed492f2-4fda-47a1-91a2-3586df3dd04b\" (UID: \"9ed492f2-4fda-47a1-91a2-3586df3dd04b\") " Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.096164 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed492f2-4fda-47a1-91a2-3586df3dd04b-utilities\") pod \"9ed492f2-4fda-47a1-91a2-3586df3dd04b\" (UID: \"9ed492f2-4fda-47a1-91a2-3586df3dd04b\") " Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.097048 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ed492f2-4fda-47a1-91a2-3586df3dd04b-utilities" (OuterVolumeSpecName: "utilities") pod "9ed492f2-4fda-47a1-91a2-3586df3dd04b" (UID: "9ed492f2-4fda-47a1-91a2-3586df3dd04b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.098100 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ed492f2-4fda-47a1-91a2-3586df3dd04b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.115038 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed492f2-4fda-47a1-91a2-3586df3dd04b-kube-api-access-zznbt" (OuterVolumeSpecName: "kube-api-access-zznbt") pod "9ed492f2-4fda-47a1-91a2-3586df3dd04b" (UID: "9ed492f2-4fda-47a1-91a2-3586df3dd04b"). InnerVolumeSpecName "kube-api-access-zznbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.155762 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ed492f2-4fda-47a1-91a2-3586df3dd04b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ed492f2-4fda-47a1-91a2-3586df3dd04b" (UID: "9ed492f2-4fda-47a1-91a2-3586df3dd04b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.200271 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zznbt\" (UniqueName: \"kubernetes.io/projected/9ed492f2-4fda-47a1-91a2-3586df3dd04b-kube-api-access-zznbt\") on node \"crc\" DevicePath \"\"" Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.200328 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ed492f2-4fda-47a1-91a2-3586df3dd04b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.485847 4858 generic.go:334] "Generic (PLEG): container finished" podID="9ed492f2-4fda-47a1-91a2-3586df3dd04b" containerID="fd4d4df3a8df7477ff72da32e0ca79547912ef6fa97230f58ba0341439e5a322" exitCode=0 Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.485925 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9fh8" event={"ID":"9ed492f2-4fda-47a1-91a2-3586df3dd04b","Type":"ContainerDied","Data":"fd4d4df3a8df7477ff72da32e0ca79547912ef6fa97230f58ba0341439e5a322"} Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.486399 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9fh8" event={"ID":"9ed492f2-4fda-47a1-91a2-3586df3dd04b","Type":"ContainerDied","Data":"6c9aebf5d7fd7d10271798818ebeb8b0eb4decf76ee50a82e7da6981048e154a"} Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.485946 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w9fh8" Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.486440 4858 scope.go:117] "RemoveContainer" containerID="fd4d4df3a8df7477ff72da32e0ca79547912ef6fa97230f58ba0341439e5a322" Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.507636 4858 scope.go:117] "RemoveContainer" containerID="ea5404c3992031bce74056cb6e6bf28d89defa5b5d9643d29a55478aee69c787" Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.528799 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w9fh8"] Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.538828 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w9fh8"] Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.561928 4858 scope.go:117] "RemoveContainer" containerID="fe7e12807238eedd1ca36829b17c6fab88525efe7f228258c903ae6eae360b55" Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.590439 4858 scope.go:117] "RemoveContainer" containerID="fd4d4df3a8df7477ff72da32e0ca79547912ef6fa97230f58ba0341439e5a322" Jan 27 20:40:03 crc kubenswrapper[4858]: E0127 20:40:03.590905 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd4d4df3a8df7477ff72da32e0ca79547912ef6fa97230f58ba0341439e5a322\": container with ID starting with fd4d4df3a8df7477ff72da32e0ca79547912ef6fa97230f58ba0341439e5a322 not found: ID does not exist" containerID="fd4d4df3a8df7477ff72da32e0ca79547912ef6fa97230f58ba0341439e5a322" Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.590947 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4d4df3a8df7477ff72da32e0ca79547912ef6fa97230f58ba0341439e5a322"} err="failed to get container status 
\"fd4d4df3a8df7477ff72da32e0ca79547912ef6fa97230f58ba0341439e5a322\": rpc error: code = NotFound desc = could not find container \"fd4d4df3a8df7477ff72da32e0ca79547912ef6fa97230f58ba0341439e5a322\": container with ID starting with fd4d4df3a8df7477ff72da32e0ca79547912ef6fa97230f58ba0341439e5a322 not found: ID does not exist" Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.590972 4858 scope.go:117] "RemoveContainer" containerID="ea5404c3992031bce74056cb6e6bf28d89defa5b5d9643d29a55478aee69c787" Jan 27 20:40:03 crc kubenswrapper[4858]: E0127 20:40:03.591277 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea5404c3992031bce74056cb6e6bf28d89defa5b5d9643d29a55478aee69c787\": container with ID starting with ea5404c3992031bce74056cb6e6bf28d89defa5b5d9643d29a55478aee69c787 not found: ID does not exist" containerID="ea5404c3992031bce74056cb6e6bf28d89defa5b5d9643d29a55478aee69c787" Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.591303 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea5404c3992031bce74056cb6e6bf28d89defa5b5d9643d29a55478aee69c787"} err="failed to get container status \"ea5404c3992031bce74056cb6e6bf28d89defa5b5d9643d29a55478aee69c787\": rpc error: code = NotFound desc = could not find container \"ea5404c3992031bce74056cb6e6bf28d89defa5b5d9643d29a55478aee69c787\": container with ID starting with ea5404c3992031bce74056cb6e6bf28d89defa5b5d9643d29a55478aee69c787 not found: ID does not exist" Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.591318 4858 scope.go:117] "RemoveContainer" containerID="fe7e12807238eedd1ca36829b17c6fab88525efe7f228258c903ae6eae360b55" Jan 27 20:40:03 crc kubenswrapper[4858]: E0127 20:40:03.591528 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe7e12807238eedd1ca36829b17c6fab88525efe7f228258c903ae6eae360b55\": container with ID starting with fe7e12807238eedd1ca36829b17c6fab88525efe7f228258c903ae6eae360b55 not found: ID does not exist" containerID="fe7e12807238eedd1ca36829b17c6fab88525efe7f228258c903ae6eae360b55" Jan 27 20:40:03 crc kubenswrapper[4858]: I0127 20:40:03.591583 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe7e12807238eedd1ca36829b17c6fab88525efe7f228258c903ae6eae360b55"} err="failed to get container status \"fe7e12807238eedd1ca36829b17c6fab88525efe7f228258c903ae6eae360b55\": rpc error: code = NotFound desc = could not find container \"fe7e12807238eedd1ca36829b17c6fab88525efe7f228258c903ae6eae360b55\": container with ID starting with fe7e12807238eedd1ca36829b17c6fab88525efe7f228258c903ae6eae360b55 not found: ID does not exist" Jan 27 20:40:04 crc kubenswrapper[4858]: I0127 20:40:04.090225 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ed492f2-4fda-47a1-91a2-3586df3dd04b" path="/var/lib/kubelet/pods/9ed492f2-4fda-47a1-91a2-3586df3dd04b/volumes" Jan 27 20:40:14 crc kubenswrapper[4858]: I0127 20:40:14.088631 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-6n2n5"] Jan 27 20:40:14 crc kubenswrapper[4858]: I0127 20:40:14.105177 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-gc4mg"] Jan 27 20:40:14 crc kubenswrapper[4858]: I0127 20:40:14.115398 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-6n2n5"] Jan 27 20:40:14 crc 
kubenswrapper[4858]: I0127 20:40:14.123777 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-gc4mg"] Jan 27 20:40:14 crc kubenswrapper[4858]: I0127 20:40:14.132964 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-nsgb9"] Jan 27 20:40:14 crc kubenswrapper[4858]: I0127 20:40:14.141572 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-nsgb9"] Jan 27 20:40:16 crc kubenswrapper[4858]: I0127 20:40:16.086616 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="047f39f4-e397-46e4-a998-4bf8060a1114" path="/var/lib/kubelet/pods/047f39f4-e397-46e4-a998-4bf8060a1114/volumes" Jan 27 20:40:16 crc kubenswrapper[4858]: I0127 20:40:16.087753 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8222b78c-e8de-4992-8c5b-bcf030d629ff" path="/var/lib/kubelet/pods/8222b78c-e8de-4992-8c5b-bcf030d629ff/volumes" Jan 27 20:40:16 crc kubenswrapper[4858]: I0127 20:40:16.088612 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7c7b1cd-a2a1-4bd2-a57c-715448327967" path="/var/lib/kubelet/pods/b7c7b1cd-a2a1-4bd2-a57c-715448327967/volumes" Jan 27 20:40:40 crc kubenswrapper[4858]: I0127 20:40:40.875955 4858 generic.go:334] "Generic (PLEG): container finished" podID="d59ffd9a-001c-400a-b79b-4617489956ed" containerID="4302e8c8e5910090099f7027c859b10c1b7381a7b11e61dd4b261ede33f706b3" exitCode=0 Jan 27 20:40:40 crc kubenswrapper[4858]: I0127 20:40:40.876116 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" event={"ID":"d59ffd9a-001c-400a-b79b-4617489956ed","Type":"ContainerDied","Data":"4302e8c8e5910090099f7027c859b10c1b7381a7b11e61dd4b261ede33f706b3"} Jan 27 20:40:42 crc kubenswrapper[4858]: I0127 20:40:42.421287 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" Jan 27 20:40:42 crc kubenswrapper[4858]: I0127 20:40:42.534126 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m49b\" (UniqueName: \"kubernetes.io/projected/d59ffd9a-001c-400a-b79b-4617489956ed-kube-api-access-7m49b\") pod \"d59ffd9a-001c-400a-b79b-4617489956ed\" (UID: \"d59ffd9a-001c-400a-b79b-4617489956ed\") " Jan 27 20:40:42 crc kubenswrapper[4858]: I0127 20:40:42.534714 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d59ffd9a-001c-400a-b79b-4617489956ed-ssh-key-openstack-edpm-ipam\") pod \"d59ffd9a-001c-400a-b79b-4617489956ed\" (UID: \"d59ffd9a-001c-400a-b79b-4617489956ed\") " Jan 27 20:40:42 crc kubenswrapper[4858]: I0127 20:40:42.534767 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d59ffd9a-001c-400a-b79b-4617489956ed-inventory\") pod \"d59ffd9a-001c-400a-b79b-4617489956ed\" (UID: \"d59ffd9a-001c-400a-b79b-4617489956ed\") " Jan 27 20:40:42 crc kubenswrapper[4858]: I0127 20:40:42.550390 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d59ffd9a-001c-400a-b79b-4617489956ed-kube-api-access-7m49b" (OuterVolumeSpecName: "kube-api-access-7m49b") pod "d59ffd9a-001c-400a-b79b-4617489956ed" (UID: "d59ffd9a-001c-400a-b79b-4617489956ed"). InnerVolumeSpecName "kube-api-access-7m49b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:40:42 crc kubenswrapper[4858]: I0127 20:40:42.567767 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59ffd9a-001c-400a-b79b-4617489956ed-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d59ffd9a-001c-400a-b79b-4617489956ed" (UID: "d59ffd9a-001c-400a-b79b-4617489956ed"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:40:42 crc kubenswrapper[4858]: I0127 20:40:42.569074 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59ffd9a-001c-400a-b79b-4617489956ed-inventory" (OuterVolumeSpecName: "inventory") pod "d59ffd9a-001c-400a-b79b-4617489956ed" (UID: "d59ffd9a-001c-400a-b79b-4617489956ed"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:40:42 crc kubenswrapper[4858]: I0127 20:40:42.637151 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m49b\" (UniqueName: \"kubernetes.io/projected/d59ffd9a-001c-400a-b79b-4617489956ed-kube-api-access-7m49b\") on node \"crc\" DevicePath \"\"" Jan 27 20:40:42 crc kubenswrapper[4858]: I0127 20:40:42.637189 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d59ffd9a-001c-400a-b79b-4617489956ed-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:40:42 crc kubenswrapper[4858]: I0127 20:40:42.637205 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d59ffd9a-001c-400a-b79b-4617489956ed-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 20:40:42 crc kubenswrapper[4858]: I0127 20:40:42.896876 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" event={"ID":"d59ffd9a-001c-400a-b79b-4617489956ed","Type":"ContainerDied","Data":"db635799032754e074c9de369e68c233e93cb25a85bc34d976e40ca1c68c2d40"} Jan 27 20:40:42 crc kubenswrapper[4858]: I0127 20:40:42.896933 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db635799032754e074c9de369e68c233e93cb25a85bc34d976e40ca1c68c2d40" Jan 27 20:40:42 crc kubenswrapper[4858]: I0127 20:40:42.896959 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.000792 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v"] Jan 27 20:40:43 crc kubenswrapper[4858]: E0127 20:40:43.001645 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acc1224-6517-4394-b1c7-7da935017a74" containerName="registry-server" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.001785 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acc1224-6517-4394-b1c7-7da935017a74" containerName="registry-server" Jan 27 20:40:43 crc kubenswrapper[4858]: E0127 20:40:43.001905 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed492f2-4fda-47a1-91a2-3586df3dd04b" containerName="extract-content" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.001991 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed492f2-4fda-47a1-91a2-3586df3dd04b" containerName="extract-content" Jan 27 20:40:43 crc kubenswrapper[4858]: E0127 20:40:43.002075 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59ffd9a-001c-400a-b79b-4617489956ed" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.002157 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59ffd9a-001c-400a-b79b-4617489956ed" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 20:40:43 crc kubenswrapper[4858]: E0127 20:40:43.002247 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed492f2-4fda-47a1-91a2-3586df3dd04b" containerName="extract-utilities" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.002326 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed492f2-4fda-47a1-91a2-3586df3dd04b" containerName="extract-utilities" Jan 27 20:40:43 crc kubenswrapper[4858]: E0127 20:40:43.002416 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acc1224-6517-4394-b1c7-7da935017a74" containerName="extract-content" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.002503 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acc1224-6517-4394-b1c7-7da935017a74" containerName="extract-content" Jan 27 20:40:43 crc kubenswrapper[4858]: E0127 20:40:43.002623 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed492f2-4fda-47a1-91a2-3586df3dd04b" containerName="registry-server" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.002709 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ed492f2-4fda-47a1-91a2-3586df3dd04b" containerName="registry-server" Jan 27 20:40:43 crc kubenswrapper[4858]: E0127 20:40:43.002803 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acc1224-6517-4394-b1c7-7da935017a74" containerName="extract-utilities" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.002886 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acc1224-6517-4394-b1c7-7da935017a74" containerName="extract-utilities" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.003196 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed492f2-4fda-47a1-91a2-3586df3dd04b" containerName="registry-server" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.003338 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d59ffd9a-001c-400a-b79b-4617489956ed" 
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.003430 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7acc1224-6517-4394-b1c7-7da935017a74" containerName="registry-server" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.004507 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.008979 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4x4qb" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.009154 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.009150 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.009752 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.017524 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v"] Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.146709 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f14a76f-9e03-4695-98e5-c1efe11ae337-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hf82v\" (UID: \"8f14a76f-9e03-4695-98e5-c1efe11ae337\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.146848 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f14a76f-9e03-4695-98e5-c1efe11ae337-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hf82v\" (UID: \"8f14a76f-9e03-4695-98e5-c1efe11ae337\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.146969 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgtgr\" (UniqueName: \"kubernetes.io/projected/8f14a76f-9e03-4695-98e5-c1efe11ae337-kube-api-access-wgtgr\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hf82v\" (UID: \"8f14a76f-9e03-4695-98e5-c1efe11ae337\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.249046 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f14a76f-9e03-4695-98e5-c1efe11ae337-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hf82v\" (UID: \"8f14a76f-9e03-4695-98e5-c1efe11ae337\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.249255 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f14a76f-9e03-4695-98e5-c1efe11ae337-ssh-key-openstack-edpm-ipam\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-hf82v\" (UID: \"8f14a76f-9e03-4695-98e5-c1efe11ae337\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.249460 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgtgr\" (UniqueName: \"kubernetes.io/projected/8f14a76f-9e03-4695-98e5-c1efe11ae337-kube-api-access-wgtgr\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hf82v\" (UID: \"8f14a76f-9e03-4695-98e5-c1efe11ae337\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.256811 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f14a76f-9e03-4695-98e5-c1efe11ae337-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hf82v\" (UID: \"8f14a76f-9e03-4695-98e5-c1efe11ae337\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.258346 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f14a76f-9e03-4695-98e5-c1efe11ae337-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hf82v\" (UID: \"8f14a76f-9e03-4695-98e5-c1efe11ae337\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.291539 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgtgr\" (UniqueName: \"kubernetes.io/projected/8f14a76f-9e03-4695-98e5-c1efe11ae337-kube-api-access-wgtgr\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hf82v\" (UID: \"8f14a76f-9e03-4695-98e5-c1efe11ae337\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.340645 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" Jan 27 20:40:43 crc kubenswrapper[4858]: I0127 20:40:43.977171 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v"] Jan 27 20:40:44 crc kubenswrapper[4858]: I0127 20:40:44.917128 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" event={"ID":"8f14a76f-9e03-4695-98e5-c1efe11ae337","Type":"ContainerStarted","Data":"259a50d5fe9df0cb3edb5d776fbb47b19635b8957e57874d1511904d95cbcf55"} Jan 27 20:40:45 crc kubenswrapper[4858]: I0127 20:40:45.930572 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" event={"ID":"8f14a76f-9e03-4695-98e5-c1efe11ae337","Type":"ContainerStarted","Data":"8c401ad7e618a36bbf02199a586a930f86a69508305660d5a1b583491d42fa55"} Jan 27 20:40:45 crc kubenswrapper[4858]: I0127 20:40:45.972328 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" podStartSLOduration=3.335668853 podStartE2EDuration="3.972300932s" podCreationTimestamp="2026-01-27 20:40:42 +0000 UTC" firstStartedPulling="2026-01-27 20:40:43.990858529 +0000 UTC m=+1988.698674235" lastFinishedPulling="2026-01-27 20:40:44.627490608 +0000 UTC m=+1989.335306314" observedRunningTime="2026-01-27 20:40:45.947186227 +0000 UTC m=+1990.655001953" watchObservedRunningTime="2026-01-27 20:40:45.972300932 +0000 UTC m=+1990.680116658" Jan 27 20:40:54 crc kubenswrapper[4858]: I0127 20:40:54.953387 4858 scope.go:117] "RemoveContainer" containerID="3405fccebf6de7872af9821078dd1c457d05027ef7264c9c69ead0bb38bec513" Jan 27 20:40:54 crc kubenswrapper[4858]: I0127 20:40:54.995239 4858 scope.go:117] "RemoveContainer" containerID="cb1fcfbc38322e9f89cec41c1db7af41b384db137a28f509ce0209026038b3d1" Jan 27 20:40:55 crc kubenswrapper[4858]: I0127 20:40:55.061067 4858 scope.go:117] "RemoveContainer" containerID="c242687079d8393bf6b627b57754e87a8068b99262fd51b668befa18f96d68b9" Jan 27 20:40:55 crc kubenswrapper[4858]: I0127 20:40:55.125352 4858 scope.go:117] "RemoveContainer" containerID="a48d4202a2867e87a32c9e97495a3047369823ace0126b45c61d27e9af6d4c1e" Jan 27 20:40:59 crc kubenswrapper[4858]: I0127 20:40:59.328966 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:40:59 crc kubenswrapper[4858]: I0127 20:40:59.329732 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:41:07 crc kubenswrapper[4858]: I0127 20:41:07.048100 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ce78-account-create-update-597dz"] Jan 27 20:41:07 crc kubenswrapper[4858]: I0127 20:41:07.061187 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-d500-account-create-update-nh98x"] Jan 27 20:41:07 crc kubenswrapper[4858]: I0127 20:41:07.075259 4858 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hns4w"] Jan 27 20:41:07 crc kubenswrapper[4858]: I0127 20:41:07.085125 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-2drzb"] Jan 27 20:41:07 crc kubenswrapper[4858]: I0127 20:41:07.094330 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ce78-account-create-update-597dz"] Jan 27 20:41:07 crc kubenswrapper[4858]: I0127 20:41:07.102342 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-zhg58"] Jan 27 20:41:07 crc kubenswrapper[4858]: I0127 20:41:07.111939 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-d500-account-create-update-nh98x"] Jan 27 20:41:07 crc kubenswrapper[4858]: I0127 20:41:07.122058 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-zhg58"] Jan 27 20:41:07 crc kubenswrapper[4858]: I0127 20:41:07.130335 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-2drzb"] Jan 27 20:41:07 crc kubenswrapper[4858]: I0127 20:41:07.139764 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hns4w"] Jan 27 20:41:08 crc kubenswrapper[4858]: I0127 20:41:08.044986 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-48b0-account-create-update-gwf9m"] Jan 27 20:41:08 crc kubenswrapper[4858]: I0127 20:41:08.057581 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-48b0-account-create-update-gwf9m"] Jan 27 20:41:08 crc kubenswrapper[4858]: I0127 20:41:08.084486 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27f669f0-2cb1-47cb-b220-758381725229" path="/var/lib/kubelet/pods/27f669f0-2cb1-47cb-b220-758381725229/volumes" Jan 27 20:41:08 crc kubenswrapper[4858]: I0127 20:41:08.085279 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="358dc567-a5bf-4e80-843c-dabb5e3535e2" path="/var/lib/kubelet/pods/358dc567-a5bf-4e80-843c-dabb5e3535e2/volumes" Jan 27 20:41:08 crc kubenswrapper[4858]: I0127 20:41:08.085876 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63cacc1d-9f19-4bc5-aec0-93d97976666a" path="/var/lib/kubelet/pods/63cacc1d-9f19-4bc5-aec0-93d97976666a/volumes" Jan 27 20:41:08 crc kubenswrapper[4858]: I0127 20:41:08.086440 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7db9d22f-78a1-402c-abda-87f2f6fe1a3d" path="/var/lib/kubelet/pods/7db9d22f-78a1-402c-abda-87f2f6fe1a3d/volumes" Jan 27 20:41:08 crc kubenswrapper[4858]: I0127 20:41:08.088056 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a" path="/var/lib/kubelet/pods/8ddb8b6a-3908-4e9d-b0af-1c8d6fa93a7a/volumes" Jan 27 20:41:08 crc kubenswrapper[4858]: I0127 20:41:08.088604 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90043c41-134e-47fc-9086-1c45a761a7c0" path="/var/lib/kubelet/pods/90043c41-134e-47fc-9086-1c45a761a7c0/volumes" Jan 27 20:41:29 crc kubenswrapper[4858]: I0127 20:41:29.328778 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:41:29 crc kubenswrapper[4858]: I0127 20:41:29.329377 4858 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:41:42 crc kubenswrapper[4858]: I0127 20:41:42.047089 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qzc9m"] Jan 27 20:41:42 crc kubenswrapper[4858]: I0127 20:41:42.060074 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qzc9m"] Jan 27 20:41:42 crc kubenswrapper[4858]: I0127 20:41:42.085023 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9d9977d-c6e5-4534-8e26-4da3b22c6cb8" path="/var/lib/kubelet/pods/f9d9977d-c6e5-4534-8e26-4da3b22c6cb8/volumes" Jan 27 20:41:55 crc kubenswrapper[4858]: I0127 20:41:55.292190 4858 scope.go:117] "RemoveContainer" containerID="eb5100e4a93531ab529d3150eaa996104168f2c33232ff5df0d38f045f7ba3d6" Jan 27 20:41:55 crc kubenswrapper[4858]: I0127 20:41:55.325045 4858 scope.go:117] "RemoveContainer" containerID="e6d409e57f83c08b66daf0b6b2a005d52a8c2b300559ee15dbf01d93994e5be8" Jan 27 20:41:55 crc kubenswrapper[4858]: I0127 20:41:55.384144 4858 scope.go:117] "RemoveContainer" containerID="21765a91083515e21d827dc54889afb0d4c75fbfb63a562393b6904329efe205" Jan 27 20:41:55 crc kubenswrapper[4858]: I0127 20:41:55.442663 4858 scope.go:117] "RemoveContainer" containerID="3f6ac5ceb752f3a0a892b9a9879c0652b4f812af34ad8dc8501d076dc168a8df" Jan 27 20:41:55 crc kubenswrapper[4858]: I0127 20:41:55.499500 4858 scope.go:117] "RemoveContainer" containerID="2329bced1af7a87ac45cdcf4efc6e92dac2e6eea9f56553643c19521cc365877" Jan 27 20:41:55 crc kubenswrapper[4858]: I0127 20:41:55.545055 4858 scope.go:117] "RemoveContainer" containerID="e74b260e6272aff3b80500e6fc6354a9ea5a7fef0b64ff0d6f6f8e8a2136bde4" Jan 27 20:41:55 crc kubenswrapper[4858]: I0127 20:41:55.590916 4858 scope.go:117] "RemoveContainer" containerID="dcab2712707ca3ec52cd765d2937fd3cca53a446e8546db49beb61ebd167bc94" Jan 27 20:41:56 crc kubenswrapper[4858]: I0127 20:41:56.692611 4858 generic.go:334] "Generic (PLEG): container finished" podID="8f14a76f-9e03-4695-98e5-c1efe11ae337" containerID="8c401ad7e618a36bbf02199a586a930f86a69508305660d5a1b583491d42fa55" exitCode=0 Jan 27 20:41:56 crc kubenswrapper[4858]: I0127 20:41:56.692698 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" event={"ID":"8f14a76f-9e03-4695-98e5-c1efe11ae337","Type":"ContainerDied","Data":"8c401ad7e618a36bbf02199a586a930f86a69508305660d5a1b583491d42fa55"} Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.234891 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.328532 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f14a76f-9e03-4695-98e5-c1efe11ae337-ssh-key-openstack-edpm-ipam\") pod \"8f14a76f-9e03-4695-98e5-c1efe11ae337\" (UID: \"8f14a76f-9e03-4695-98e5-c1efe11ae337\") " Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.328695 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f14a76f-9e03-4695-98e5-c1efe11ae337-inventory\") pod \"8f14a76f-9e03-4695-98e5-c1efe11ae337\" (UID: \"8f14a76f-9e03-4695-98e5-c1efe11ae337\") " Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.328854 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgtgr\" (UniqueName: \"kubernetes.io/projected/8f14a76f-9e03-4695-98e5-c1efe11ae337-kube-api-access-wgtgr\") pod \"8f14a76f-9e03-4695-98e5-c1efe11ae337\" (UID: \"8f14a76f-9e03-4695-98e5-c1efe11ae337\") " Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.347763 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f14a76f-9e03-4695-98e5-c1efe11ae337-kube-api-access-wgtgr" (OuterVolumeSpecName: "kube-api-access-wgtgr") pod "8f14a76f-9e03-4695-98e5-c1efe11ae337" (UID: "8f14a76f-9e03-4695-98e5-c1efe11ae337"). InnerVolumeSpecName "kube-api-access-wgtgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.370582 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f14a76f-9e03-4695-98e5-c1efe11ae337-inventory" (OuterVolumeSpecName: "inventory") pod "8f14a76f-9e03-4695-98e5-c1efe11ae337" (UID: "8f14a76f-9e03-4695-98e5-c1efe11ae337"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.370895 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f14a76f-9e03-4695-98e5-c1efe11ae337-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8f14a76f-9e03-4695-98e5-c1efe11ae337" (UID: "8f14a76f-9e03-4695-98e5-c1efe11ae337"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.432374 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgtgr\" (UniqueName: \"kubernetes.io/projected/8f14a76f-9e03-4695-98e5-c1efe11ae337-kube-api-access-wgtgr\") on node \"crc\" DevicePath \"\"" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.432429 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f14a76f-9e03-4695-98e5-c1efe11ae337-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.432441 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f14a76f-9e03-4695-98e5-c1efe11ae337-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.716223 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" event={"ID":"8f14a76f-9e03-4695-98e5-c1efe11ae337","Type":"ContainerDied","Data":"259a50d5fe9df0cb3edb5d776fbb47b19635b8957e57874d1511904d95cbcf55"} Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.716287 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="259a50d5fe9df0cb3edb5d776fbb47b19635b8957e57874d1511904d95cbcf55" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.716645 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hf82v" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.848508 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk"] Jan 27 20:41:58 crc kubenswrapper[4858]: E0127 20:41:58.849139 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f14a76f-9e03-4695-98e5-c1efe11ae337" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.849186 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f14a76f-9e03-4695-98e5-c1efe11ae337" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.849471 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f14a76f-9e03-4695-98e5-c1efe11ae337" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.850609 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.853458 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.853932 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4x4qb" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.854153 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.854366 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.860418 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk"] Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.945272 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk\" (UID: \"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.945388 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk\" (UID: \"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" Jan 27 20:41:58 crc kubenswrapper[4858]: I0127 20:41:58.945431 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p448w\" (UniqueName: \"kubernetes.io/projected/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-kube-api-access-p448w\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk\" (UID: \"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" Jan 27 20:41:59 crc kubenswrapper[4858]: I0127 20:41:59.047718 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk\" (UID: \"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" Jan 27 20:41:59 crc kubenswrapper[4858]: I0127 20:41:59.047807 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p448w\" (UniqueName: \"kubernetes.io/projected/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-kube-api-access-p448w\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk\" (UID: \"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" Jan 27 20:41:59 crc kubenswrapper[4858]: I0127 20:41:59.048057 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk\" (UID: \"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" Jan 27 20:41:59 crc kubenswrapper[4858]: I0127 20:41:59.057057 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk\" (UID: \"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" Jan 27 20:41:59 crc kubenswrapper[4858]: I0127 20:41:59.060338 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk\" (UID: \"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" Jan 27 20:41:59 crc kubenswrapper[4858]: I0127 20:41:59.076858 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p448w\" (UniqueName: \"kubernetes.io/projected/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-kube-api-access-p448w\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk\" (UID: \"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" Jan 27 20:41:59 crc kubenswrapper[4858]: I0127 20:41:59.173740 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" Jan 27 20:41:59 crc kubenswrapper[4858]: I0127 20:41:59.329300 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:41:59 crc kubenswrapper[4858]: I0127 20:41:59.329375 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:41:59 crc kubenswrapper[4858]: I0127 20:41:59.329430 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:41:59 crc kubenswrapper[4858]: I0127 20:41:59.330410 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"336e4dbda5f2330cb97a3401d43a535416bd6411da7f0e5d5731c4398198a98c"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 20:41:59 crc kubenswrapper[4858]: I0127 20:41:59.330481 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" 
containerID="cri-o://336e4dbda5f2330cb97a3401d43a535416bd6411da7f0e5d5731c4398198a98c" gracePeriod=600 Jan 27 20:41:59 crc kubenswrapper[4858]: I0127 20:41:59.739717 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="336e4dbda5f2330cb97a3401d43a535416bd6411da7f0e5d5731c4398198a98c" exitCode=0 Jan 27 20:41:59 crc kubenswrapper[4858]: I0127 20:41:59.739801 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"336e4dbda5f2330cb97a3401d43a535416bd6411da7f0e5d5731c4398198a98c"} Jan 27 20:41:59 crc kubenswrapper[4858]: I0127 20:41:59.740137 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710"} Jan 27 20:41:59 crc kubenswrapper[4858]: I0127 20:41:59.740169 4858 scope.go:117] "RemoveContainer" containerID="759afc97c87c171566e89967116620e28b65947c9dba26fc560c17847b8d44f8" Jan 27 20:41:59 crc kubenswrapper[4858]: I0127 20:41:59.791776 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk"] Jan 27 20:41:59 crc kubenswrapper[4858]: W0127 20:41:59.818500 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29f6143e_1aa7_4d0f_91ce_267d3e2fe84e.slice/crio-bd6253be1098bfda24bc75203558de2d30c1fcb991554971b05bcadf730eae9a WatchSource:0}: Error finding container bd6253be1098bfda24bc75203558de2d30c1fcb991554971b05bcadf730eae9a: Status 404 returned error can't find the container with id bd6253be1098bfda24bc75203558de2d30c1fcb991554971b05bcadf730eae9a Jan 27 20:42:00 crc kubenswrapper[4858]: I0127 20:42:00.754497 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" event={"ID":"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e","Type":"ContainerStarted","Data":"02cb4e51f3c4a6c704c6ef2d4ac96e1100e6ea6a25c6eecf89abb8c5b43bca4d"} Jan 27 20:42:00 crc kubenswrapper[4858]: I0127 20:42:00.755289 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" event={"ID":"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e","Type":"ContainerStarted","Data":"bd6253be1098bfda24bc75203558de2d30c1fcb991554971b05bcadf730eae9a"} Jan 27 20:42:00 crc kubenswrapper[4858]: I0127 20:42:00.784506 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" podStartSLOduration=2.155200501 podStartE2EDuration="2.784478702s" podCreationTimestamp="2026-01-27 20:41:58 +0000 UTC" firstStartedPulling="2026-01-27 20:41:59.822858293 +0000 UTC m=+2064.530674019" lastFinishedPulling="2026-01-27 20:42:00.452136504 +0000 UTC m=+2065.159952220" observedRunningTime="2026-01-27 20:42:00.772461655 +0000 UTC m=+2065.480277381" watchObservedRunningTime="2026-01-27 20:42:00.784478702 +0000 UTC m=+2065.492294428" Jan 27 20:42:06 crc kubenswrapper[4858]: I0127 20:42:06.817876 4858 generic.go:334] "Generic (PLEG): container finished" podID="29f6143e-1aa7-4d0f-91ce-267d3e2fe84e" containerID="02cb4e51f3c4a6c704c6ef2d4ac96e1100e6ea6a25c6eecf89abb8c5b43bca4d" exitCode=0 Jan 27 
20:42:06 crc kubenswrapper[4858]: I0127 20:42:06.818671 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" event={"ID":"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e","Type":"ContainerDied","Data":"02cb4e51f3c4a6c704c6ef2d4ac96e1100e6ea6a25c6eecf89abb8c5b43bca4d"} Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.194718 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cwhrn"] Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.207684 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwhrn" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.221754 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwhrn"] Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.308864 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.330913 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glnqq\" (UniqueName: \"kubernetes.io/projected/2b5098a1-6553-43e0-80fb-aae744353a50-kube-api-access-glnqq\") pod \"redhat-operators-cwhrn\" (UID: \"2b5098a1-6553-43e0-80fb-aae744353a50\") " pod="openshift-marketplace/redhat-operators-cwhrn" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.331033 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b5098a1-6553-43e0-80fb-aae744353a50-utilities\") pod \"redhat-operators-cwhrn\" (UID: \"2b5098a1-6553-43e0-80fb-aae744353a50\") " pod="openshift-marketplace/redhat-operators-cwhrn" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.331111 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b5098a1-6553-43e0-80fb-aae744353a50-catalog-content\") pod \"redhat-operators-cwhrn\" (UID: \"2b5098a1-6553-43e0-80fb-aae744353a50\") " pod="openshift-marketplace/redhat-operators-cwhrn" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.432304 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-inventory\") pod \"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e\" (UID: \"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e\") " Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.432360 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-ssh-key-openstack-edpm-ipam\") pod \"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e\" (UID: \"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e\") " Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.432667 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p448w\" (UniqueName: \"kubernetes.io/projected/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-kube-api-access-p448w\") pod \"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e\" (UID: \"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e\") " Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.432998 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b5098a1-6553-43e0-80fb-aae744353a50-utilities\") pod \"redhat-operators-cwhrn\" (UID: \"2b5098a1-6553-43e0-80fb-aae744353a50\") " pod="openshift-marketplace/redhat-operators-cwhrn" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.433090 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b5098a1-6553-43e0-80fb-aae744353a50-catalog-content\") pod \"redhat-operators-cwhrn\" (UID: \"2b5098a1-6553-43e0-80fb-aae744353a50\") " pod="openshift-marketplace/redhat-operators-cwhrn" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.433119 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glnqq\" (UniqueName: \"kubernetes.io/projected/2b5098a1-6553-43e0-80fb-aae744353a50-kube-api-access-glnqq\") pod \"redhat-operators-cwhrn\" (UID: \"2b5098a1-6553-43e0-80fb-aae744353a50\") " pod="openshift-marketplace/redhat-operators-cwhrn" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.433861 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b5098a1-6553-43e0-80fb-aae744353a50-utilities\") pod \"redhat-operators-cwhrn\" (UID: \"2b5098a1-6553-43e0-80fb-aae744353a50\") " pod="openshift-marketplace/redhat-operators-cwhrn" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.434077 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b5098a1-6553-43e0-80fb-aae744353a50-catalog-content\") pod \"redhat-operators-cwhrn\" (UID: \"2b5098a1-6553-43e0-80fb-aae744353a50\") " pod="openshift-marketplace/redhat-operators-cwhrn" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.443732 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-kube-api-access-p448w" (OuterVolumeSpecName: "kube-api-access-p448w") pod "29f6143e-1aa7-4d0f-91ce-267d3e2fe84e" (UID: "29f6143e-1aa7-4d0f-91ce-267d3e2fe84e"). InnerVolumeSpecName "kube-api-access-p448w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.480182 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "29f6143e-1aa7-4d0f-91ce-267d3e2fe84e" (UID: "29f6143e-1aa7-4d0f-91ce-267d3e2fe84e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.484740 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-inventory" (OuterVolumeSpecName: "inventory") pod "29f6143e-1aa7-4d0f-91ce-267d3e2fe84e" (UID: "29f6143e-1aa7-4d0f-91ce-267d3e2fe84e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.493759 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glnqq\" (UniqueName: \"kubernetes.io/projected/2b5098a1-6553-43e0-80fb-aae744353a50-kube-api-access-glnqq\") pod \"redhat-operators-cwhrn\" (UID: \"2b5098a1-6553-43e0-80fb-aae744353a50\") " pod="openshift-marketplace/redhat-operators-cwhrn" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.535375 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.535423 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.535436 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p448w\" (UniqueName: \"kubernetes.io/projected/29f6143e-1aa7-4d0f-91ce-267d3e2fe84e-kube-api-access-p448w\") on node \"crc\" DevicePath \"\"" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.630848 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwhrn" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.850283 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" event={"ID":"29f6143e-1aa7-4d0f-91ce-267d3e2fe84e","Type":"ContainerDied","Data":"bd6253be1098bfda24bc75203558de2d30c1fcb991554971b05bcadf730eae9a"} Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.850722 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd6253be1098bfda24bc75203558de2d30c1fcb991554971b05bcadf730eae9a" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.850596 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.940068 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd"] Jan 27 20:42:08 crc kubenswrapper[4858]: E0127 20:42:08.940690 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f6143e-1aa7-4d0f-91ce-267d3e2fe84e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.940709 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f6143e-1aa7-4d0f-91ce-267d3e2fe84e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.941114 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="29f6143e-1aa7-4d0f-91ce-267d3e2fe84e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.942044 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.952325 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.952322 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.952594 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4x4qb" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.952776 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 20:42:08 crc kubenswrapper[4858]: I0127 20:42:08.976697 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd"] Jan 27 20:42:09 crc kubenswrapper[4858]: I0127 20:42:09.045132 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9n2z\" (UniqueName: \"kubernetes.io/projected/78e25299-cf17-451b-8f2f-d980ff184dac-kube-api-access-r9n2z\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lzndd\" (UID: \"78e25299-cf17-451b-8f2f-d980ff184dac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" Jan 27 20:42:09 crc kubenswrapper[4858]: I0127 20:42:09.045192 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78e25299-cf17-451b-8f2f-d980ff184dac-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lzndd\" (UID: \"78e25299-cf17-451b-8f2f-d980ff184dac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" Jan 27 20:42:09 crc kubenswrapper[4858]: I0127 20:42:09.045294 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78e25299-cf17-451b-8f2f-d980ff184dac-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lzndd\" (UID: \"78e25299-cf17-451b-8f2f-d980ff184dac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" Jan 27 20:42:09 crc kubenswrapper[4858]: I0127 20:42:09.147012 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9n2z\" (UniqueName: \"kubernetes.io/projected/78e25299-cf17-451b-8f2f-d980ff184dac-kube-api-access-r9n2z\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lzndd\" (UID: \"78e25299-cf17-451b-8f2f-d980ff184dac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" Jan 27 20:42:09 crc kubenswrapper[4858]: I0127 20:42:09.147363 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78e25299-cf17-451b-8f2f-d980ff184dac-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lzndd\" (UID: \"78e25299-cf17-451b-8f2f-d980ff184dac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" Jan 27 20:42:09 crc kubenswrapper[4858]: I0127 20:42:09.147456 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78e25299-cf17-451b-8f2f-d980ff184dac-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-lzndd\" (UID: \"78e25299-cf17-451b-8f2f-d980ff184dac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" Jan 27 20:42:09 crc kubenswrapper[4858]: I0127 20:42:09.154422 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78e25299-cf17-451b-8f2f-d980ff184dac-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lzndd\" (UID: \"78e25299-cf17-451b-8f2f-d980ff184dac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" Jan 27 20:42:09 crc kubenswrapper[4858]: I0127 20:42:09.160920 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78e25299-cf17-451b-8f2f-d980ff184dac-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lzndd\" (UID: \"78e25299-cf17-451b-8f2f-d980ff184dac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" Jan 27 20:42:09 crc kubenswrapper[4858]: I0127 20:42:09.166186 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9n2z\" (UniqueName: \"kubernetes.io/projected/78e25299-cf17-451b-8f2f-d980ff184dac-kube-api-access-r9n2z\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lzndd\" (UID: \"78e25299-cf17-451b-8f2f-d980ff184dac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" Jan 27 20:42:09 crc kubenswrapper[4858]: I0127 20:42:09.220352 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwhrn"] Jan 27 20:42:09 crc kubenswrapper[4858]: I0127 20:42:09.283884 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" Jan 27 20:42:09 crc kubenswrapper[4858]: I0127 20:42:09.863532 4858 generic.go:334] "Generic (PLEG): container finished" podID="2b5098a1-6553-43e0-80fb-aae744353a50" containerID="1372f206f0c48333a3c28127b1a852a7bdda2a6dba60ad1e9ceecdd030be64de" exitCode=0 Jan 27 20:42:09 crc kubenswrapper[4858]: I0127 20:42:09.863726 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwhrn" event={"ID":"2b5098a1-6553-43e0-80fb-aae744353a50","Type":"ContainerDied","Data":"1372f206f0c48333a3c28127b1a852a7bdda2a6dba60ad1e9ceecdd030be64de"} Jan 27 20:42:09 crc kubenswrapper[4858]: I0127 20:42:09.864067 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwhrn" event={"ID":"2b5098a1-6553-43e0-80fb-aae744353a50","Type":"ContainerStarted","Data":"288b992b30f2a611e2cf2d101de3f3dfea611261e8348ae42a36458a59fb5b9e"} Jan 27 20:42:09 crc kubenswrapper[4858]: I0127 20:42:09.897002 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd"] Jan 27 20:42:09 crc kubenswrapper[4858]: W0127 20:42:09.910763 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78e25299_cf17_451b_8f2f_d980ff184dac.slice/crio-2eda9fa2fcab9a28493e8645579a64b990e489232bf1815acaf3197be9722e93 WatchSource:0}: Error finding container 2eda9fa2fcab9a28493e8645579a64b990e489232bf1815acaf3197be9722e93: Status 404 returned error can't find the container with id 2eda9fa2fcab9a28493e8645579a64b990e489232bf1815acaf3197be9722e93 Jan 27 20:42:10 crc kubenswrapper[4858]: I0127 20:42:10.876693 
4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" event={"ID":"78e25299-cf17-451b-8f2f-d980ff184dac","Type":"ContainerStarted","Data":"60abd87362a0a1085d0825fe7ac662c38a1e70a2ddbd4c6a7d87d4be652f486e"} Jan 27 20:42:10 crc kubenswrapper[4858]: I0127 20:42:10.877885 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" event={"ID":"78e25299-cf17-451b-8f2f-d980ff184dac","Type":"ContainerStarted","Data":"2eda9fa2fcab9a28493e8645579a64b990e489232bf1815acaf3197be9722e93"} Jan 27 20:42:10 crc kubenswrapper[4858]: I0127 20:42:10.887112 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwhrn" event={"ID":"2b5098a1-6553-43e0-80fb-aae744353a50","Type":"ContainerStarted","Data":"7e95c47447832ddc9fd48fab571b2970615b3ed437de2225b672b91ea8d6c29f"} Jan 27 20:42:10 crc kubenswrapper[4858]: I0127 20:42:10.916477 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" podStartSLOduration=2.415804385 podStartE2EDuration="2.916451877s" podCreationTimestamp="2026-01-27 20:42:08 +0000 UTC" firstStartedPulling="2026-01-27 20:42:09.915565595 +0000 UTC m=+2074.623381301" lastFinishedPulling="2026-01-27 20:42:10.416213097 +0000 UTC m=+2075.124028793" observedRunningTime="2026-01-27 20:42:10.909715648 +0000 UTC m=+2075.617531404" watchObservedRunningTime="2026-01-27 20:42:10.916451877 +0000 UTC m=+2075.624267603" Jan 27 20:42:11 crc kubenswrapper[4858]: I0127 20:42:11.050379 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-qzhms"] Jan 27 20:42:11 crc kubenswrapper[4858]: I0127 20:42:11.061520 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-qzhms"] Jan 27 20:42:12 crc kubenswrapper[4858]: I0127 20:42:12.084411 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95620ef2-3348-440f-b7f6-ddebaccc5f17" path="/var/lib/kubelet/pods/95620ef2-3348-440f-b7f6-ddebaccc5f17/volumes" Jan 27 20:42:13 crc kubenswrapper[4858]: I0127 20:42:13.925532 4858 generic.go:334] "Generic (PLEG): container finished" podID="2b5098a1-6553-43e0-80fb-aae744353a50" containerID="7e95c47447832ddc9fd48fab571b2970615b3ed437de2225b672b91ea8d6c29f" exitCode=0 Jan 27 20:42:13 crc kubenswrapper[4858]: I0127 20:42:13.925641 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwhrn" event={"ID":"2b5098a1-6553-43e0-80fb-aae744353a50","Type":"ContainerDied","Data":"7e95c47447832ddc9fd48fab571b2970615b3ed437de2225b672b91ea8d6c29f"} Jan 27 20:42:14 crc kubenswrapper[4858]: I0127 20:42:14.952984 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwhrn" event={"ID":"2b5098a1-6553-43e0-80fb-aae744353a50","Type":"ContainerStarted","Data":"82f97bf457839774d98d0a4b48af13df643a28c10e94f22cb2397179f99be459"} Jan 27 20:42:14 crc kubenswrapper[4858]: I0127 20:42:14.981313 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cwhrn" podStartSLOduration=2.369266408 podStartE2EDuration="6.981286966s" podCreationTimestamp="2026-01-27 20:42:08 +0000 UTC" firstStartedPulling="2026-01-27 20:42:09.867054003 +0000 UTC m=+2074.574869709" lastFinishedPulling="2026-01-27 20:42:14.479074561 +0000 UTC m=+2079.186890267" 
observedRunningTime="2026-01-27 20:42:14.977039157 +0000 UTC m=+2079.684854883" watchObservedRunningTime="2026-01-27 20:42:14.981286966 +0000 UTC m=+2079.689102672" Jan 27 20:42:15 crc kubenswrapper[4858]: I0127 20:42:15.043908 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cw7wg"] Jan 27 20:42:15 crc kubenswrapper[4858]: I0127 20:42:15.058009 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cw7wg"] Jan 27 20:42:16 crc kubenswrapper[4858]: I0127 20:42:16.083242 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e945c5a3-9e91-4cde-923f-764261351ad1" path="/var/lib/kubelet/pods/e945c5a3-9e91-4cde-923f-764261351ad1/volumes" Jan 27 20:42:18 crc kubenswrapper[4858]: I0127 20:42:18.631041 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cwhrn" Jan 27 20:42:18 crc kubenswrapper[4858]: I0127 20:42:18.631614 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cwhrn" Jan 27 20:42:19 crc kubenswrapper[4858]: I0127 20:42:19.701762 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cwhrn" podUID="2b5098a1-6553-43e0-80fb-aae744353a50" containerName="registry-server" probeResult="failure" output=< Jan 27 20:42:19 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 20:42:19 crc kubenswrapper[4858]: > Jan 27 20:42:28 crc kubenswrapper[4858]: I0127 20:42:28.708925 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cwhrn" Jan 27 20:42:28 crc kubenswrapper[4858]: I0127 20:42:28.767726 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cwhrn" Jan 27 20:42:28 crc kubenswrapper[4858]: I0127 20:42:28.950112 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cwhrn"] Jan 27 20:42:30 crc kubenswrapper[4858]: I0127 20:42:30.109280 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cwhrn" podUID="2b5098a1-6553-43e0-80fb-aae744353a50" containerName="registry-server" containerID="cri-o://82f97bf457839774d98d0a4b48af13df643a28c10e94f22cb2397179f99be459" gracePeriod=2 Jan 27 20:42:30 crc kubenswrapper[4858]: I0127 20:42:30.576872 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cwhrn" Jan 27 20:42:30 crc kubenswrapper[4858]: I0127 20:42:30.607897 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glnqq\" (UniqueName: \"kubernetes.io/projected/2b5098a1-6553-43e0-80fb-aae744353a50-kube-api-access-glnqq\") pod \"2b5098a1-6553-43e0-80fb-aae744353a50\" (UID: \"2b5098a1-6553-43e0-80fb-aae744353a50\") " Jan 27 20:42:30 crc kubenswrapper[4858]: I0127 20:42:30.608367 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b5098a1-6553-43e0-80fb-aae744353a50-utilities\") pod \"2b5098a1-6553-43e0-80fb-aae744353a50\" (UID: \"2b5098a1-6553-43e0-80fb-aae744353a50\") " Jan 27 20:42:30 crc kubenswrapper[4858]: I0127 20:42:30.608537 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b5098a1-6553-43e0-80fb-aae744353a50-catalog-content\") pod \"2b5098a1-6553-43e0-80fb-aae744353a50\" (UID: \"2b5098a1-6553-43e0-80fb-aae744353a50\") " Jan 27 20:42:30 crc kubenswrapper[4858]: I0127 20:42:30.611441 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b5098a1-6553-43e0-80fb-aae744353a50-utilities" (OuterVolumeSpecName: "utilities") pod "2b5098a1-6553-43e0-80fb-aae744353a50" (UID: "2b5098a1-6553-43e0-80fb-aae744353a50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:42:30 crc kubenswrapper[4858]: I0127 20:42:30.616073 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b5098a1-6553-43e0-80fb-aae744353a50-kube-api-access-glnqq" (OuterVolumeSpecName: "kube-api-access-glnqq") pod "2b5098a1-6553-43e0-80fb-aae744353a50" (UID: "2b5098a1-6553-43e0-80fb-aae744353a50"). InnerVolumeSpecName "kube-api-access-glnqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:42:30 crc kubenswrapper[4858]: I0127 20:42:30.714489 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b5098a1-6553-43e0-80fb-aae744353a50-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:42:30 crc kubenswrapper[4858]: I0127 20:42:30.714914 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glnqq\" (UniqueName: \"kubernetes.io/projected/2b5098a1-6553-43e0-80fb-aae744353a50-kube-api-access-glnqq\") on node \"crc\" DevicePath \"\"" Jan 27 20:42:30 crc kubenswrapper[4858]: I0127 20:42:30.775664 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b5098a1-6553-43e0-80fb-aae744353a50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b5098a1-6553-43e0-80fb-aae744353a50" (UID: "2b5098a1-6553-43e0-80fb-aae744353a50"). InnerVolumeSpecName "catalog-content". 
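
The startup-probe failure recorded above (timeout: failed to connect service ":50051" within 1s, followed nine seconds later by status="started" and then "ready") is kubelet re-running the registry-server probe until the endpoint accepts connections. A minimal sketch of such a connect-within-1s check, assuming the probe amounts to a connection attempt against :50051; the real check is most likely a gRPC health probe, approximated here with a plain TCP dial:

```go
// Sketch only: approximate the registry-server startup probe with a
// 1-second TCP connection attempt against :50051. The real probe is
// most likely a gRPC health check; only the connect-timeout behavior
// is reproduced here.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:50051", 1*time.Second)
	if err != nil {
		// kubelet records this as probeResult="failure", as at 20:42:19
		fmt.Println("probe failure:", err)
		return
	}
	conn.Close()
	// once a listener is bound, the probe flips to "started" (20:42:28)
	fmt.Println("probe success")
}
```

Pointing this at a port with no listener reproduces the 20:42:19 failure; once the server binds :50051 it reports success, matching the 20:42:28 transition. The log resumes below.
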
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:42:30 crc kubenswrapper[4858]: I0127 20:42:30.817417 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b5098a1-6553-43e0-80fb-aae744353a50-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:42:31 crc kubenswrapper[4858]: I0127 20:42:31.129924 4858 generic.go:334] "Generic (PLEG): container finished" podID="2b5098a1-6553-43e0-80fb-aae744353a50" containerID="82f97bf457839774d98d0a4b48af13df643a28c10e94f22cb2397179f99be459" exitCode=0 Jan 27 20:42:31 crc kubenswrapper[4858]: I0127 20:42:31.129992 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwhrn" event={"ID":"2b5098a1-6553-43e0-80fb-aae744353a50","Type":"ContainerDied","Data":"82f97bf457839774d98d0a4b48af13df643a28c10e94f22cb2397179f99be459"} Jan 27 20:42:31 crc kubenswrapper[4858]: I0127 20:42:31.130013 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwhrn" Jan 27 20:42:31 crc kubenswrapper[4858]: I0127 20:42:31.130063 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwhrn" event={"ID":"2b5098a1-6553-43e0-80fb-aae744353a50","Type":"ContainerDied","Data":"288b992b30f2a611e2cf2d101de3f3dfea611261e8348ae42a36458a59fb5b9e"} Jan 27 20:42:31 crc kubenswrapper[4858]: I0127 20:42:31.130105 4858 scope.go:117] "RemoveContainer" containerID="82f97bf457839774d98d0a4b48af13df643a28c10e94f22cb2397179f99be459" Jan 27 20:42:31 crc kubenswrapper[4858]: I0127 20:42:31.195642 4858 scope.go:117] "RemoveContainer" containerID="7e95c47447832ddc9fd48fab571b2970615b3ed437de2225b672b91ea8d6c29f" Jan 27 20:42:31 crc kubenswrapper[4858]: I0127 20:42:31.195647 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cwhrn"] Jan 27 20:42:31 crc kubenswrapper[4858]: I0127 20:42:31.216738 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cwhrn"] Jan 27 20:42:31 crc kubenswrapper[4858]: I0127 20:42:31.232254 4858 scope.go:117] "RemoveContainer" containerID="1372f206f0c48333a3c28127b1a852a7bdda2a6dba60ad1e9ceecdd030be64de" Jan 27 20:42:31 crc kubenswrapper[4858]: I0127 20:42:31.278607 4858 scope.go:117] "RemoveContainer" containerID="82f97bf457839774d98d0a4b48af13df643a28c10e94f22cb2397179f99be459" Jan 27 20:42:31 crc kubenswrapper[4858]: E0127 20:42:31.279447 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82f97bf457839774d98d0a4b48af13df643a28c10e94f22cb2397179f99be459\": container with ID starting with 82f97bf457839774d98d0a4b48af13df643a28c10e94f22cb2397179f99be459 not found: ID does not exist" containerID="82f97bf457839774d98d0a4b48af13df643a28c10e94f22cb2397179f99be459" Jan 27 20:42:31 crc kubenswrapper[4858]: I0127 20:42:31.279517 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f97bf457839774d98d0a4b48af13df643a28c10e94f22cb2397179f99be459"} err="failed to get container status \"82f97bf457839774d98d0a4b48af13df643a28c10e94f22cb2397179f99be459\": rpc error: code = NotFound desc = could not find container \"82f97bf457839774d98d0a4b48af13df643a28c10e94f22cb2397179f99be459\": container with ID starting with 82f97bf457839774d98d0a4b48af13df643a28c10e94f22cb2397179f99be459 not found: ID does not exist" Jan 27 20:42:31 crc 
kubenswrapper[4858]: I0127 20:42:31.279577 4858 scope.go:117] "RemoveContainer" containerID="7e95c47447832ddc9fd48fab571b2970615b3ed437de2225b672b91ea8d6c29f" Jan 27 20:42:31 crc kubenswrapper[4858]: E0127 20:42:31.280249 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e95c47447832ddc9fd48fab571b2970615b3ed437de2225b672b91ea8d6c29f\": container with ID starting with 7e95c47447832ddc9fd48fab571b2970615b3ed437de2225b672b91ea8d6c29f not found: ID does not exist" containerID="7e95c47447832ddc9fd48fab571b2970615b3ed437de2225b672b91ea8d6c29f" Jan 27 20:42:31 crc kubenswrapper[4858]: I0127 20:42:31.280320 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e95c47447832ddc9fd48fab571b2970615b3ed437de2225b672b91ea8d6c29f"} err="failed to get container status \"7e95c47447832ddc9fd48fab571b2970615b3ed437de2225b672b91ea8d6c29f\": rpc error: code = NotFound desc = could not find container \"7e95c47447832ddc9fd48fab571b2970615b3ed437de2225b672b91ea8d6c29f\": container with ID starting with 7e95c47447832ddc9fd48fab571b2970615b3ed437de2225b672b91ea8d6c29f not found: ID does not exist" Jan 27 20:42:31 crc kubenswrapper[4858]: I0127 20:42:31.280371 4858 scope.go:117] "RemoveContainer" containerID="1372f206f0c48333a3c28127b1a852a7bdda2a6dba60ad1e9ceecdd030be64de" Jan 27 20:42:31 crc kubenswrapper[4858]: E0127 20:42:31.280831 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1372f206f0c48333a3c28127b1a852a7bdda2a6dba60ad1e9ceecdd030be64de\": container with ID starting with 1372f206f0c48333a3c28127b1a852a7bdda2a6dba60ad1e9ceecdd030be64de not found: ID does not exist" containerID="1372f206f0c48333a3c28127b1a852a7bdda2a6dba60ad1e9ceecdd030be64de" Jan 27 20:42:31 crc kubenswrapper[4858]: I0127 20:42:31.280867 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1372f206f0c48333a3c28127b1a852a7bdda2a6dba60ad1e9ceecdd030be64de"} err="failed to get container status \"1372f206f0c48333a3c28127b1a852a7bdda2a6dba60ad1e9ceecdd030be64de\": rpc error: code = NotFound desc = could not find container \"1372f206f0c48333a3c28127b1a852a7bdda2a6dba60ad1e9ceecdd030be64de\": container with ID starting with 1372f206f0c48333a3c28127b1a852a7bdda2a6dba60ad1e9ceecdd030be64de not found: ID does not exist" Jan 27 20:42:32 crc kubenswrapper[4858]: I0127 20:42:32.091975 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b5098a1-6553-43e0-80fb-aae744353a50" path="/var/lib/kubelet/pods/2b5098a1-6553-43e0-80fb-aae744353a50/volumes" Jan 27 20:42:51 crc kubenswrapper[4858]: I0127 20:42:51.349150 4858 generic.go:334] "Generic (PLEG): container finished" podID="78e25299-cf17-451b-8f2f-d980ff184dac" containerID="60abd87362a0a1085d0825fe7ac662c38a1e70a2ddbd4c6a7d87d4be652f486e" exitCode=0 Jan 27 20:42:51 crc kubenswrapper[4858]: I0127 20:42:51.349273 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" event={"ID":"78e25299-cf17-451b-8f2f-d980ff184dac","Type":"ContainerDied","Data":"60abd87362a0a1085d0825fe7ac662c38a1e70a2ddbd4c6a7d87d4be652f486e"} Jan 27 20:42:52 crc kubenswrapper[4858]: I0127 20:42:52.917521 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.047302 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78e25299-cf17-451b-8f2f-d980ff184dac-inventory\") pod \"78e25299-cf17-451b-8f2f-d980ff184dac\" (UID: \"78e25299-cf17-451b-8f2f-d980ff184dac\") " Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.047433 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9n2z\" (UniqueName: \"kubernetes.io/projected/78e25299-cf17-451b-8f2f-d980ff184dac-kube-api-access-r9n2z\") pod \"78e25299-cf17-451b-8f2f-d980ff184dac\" (UID: \"78e25299-cf17-451b-8f2f-d980ff184dac\") " Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.047747 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78e25299-cf17-451b-8f2f-d980ff184dac-ssh-key-openstack-edpm-ipam\") pod \"78e25299-cf17-451b-8f2f-d980ff184dac\" (UID: \"78e25299-cf17-451b-8f2f-d980ff184dac\") " Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.054134 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e25299-cf17-451b-8f2f-d980ff184dac-kube-api-access-r9n2z" (OuterVolumeSpecName: "kube-api-access-r9n2z") pod "78e25299-cf17-451b-8f2f-d980ff184dac" (UID: "78e25299-cf17-451b-8f2f-d980ff184dac"). InnerVolumeSpecName "kube-api-access-r9n2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.082665 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e25299-cf17-451b-8f2f-d980ff184dac-inventory" (OuterVolumeSpecName: "inventory") pod "78e25299-cf17-451b-8f2f-d980ff184dac" (UID: "78e25299-cf17-451b-8f2f-d980ff184dac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.084923 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e25299-cf17-451b-8f2f-d980ff184dac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "78e25299-cf17-451b-8f2f-d980ff184dac" (UID: "78e25299-cf17-451b-8f2f-d980ff184dac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
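
The RemoveContainer / "ContainerStatus from runtime service failed ... NotFound" pairs above are a benign race: the containers were already removed, so the follow-up status lookups return gRPC NotFound and the deletor logs the error and proceeds. A sketch of that idempotent-delete pattern, with a stubbed removeContainer standing in for the CRI call (names here are illustrative, not kubelet's actual helpers):

```go
// Sketch of treating gRPC NotFound from a delete/status call as
// "already gone" rather than a real failure.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for a CRI RemoveContainer RPC that lost a
// race: the container is already gone, so the runtime answers NotFound.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

// removeIfPresent treats NotFound as success, mirroring how the log
// above records the error but continues as if the container were deleted.
func removeIfPresent(id string) error {
	if err := removeContainer(id); err != nil && status.Code(err) != codes.NotFound {
		return err
	}
	return nil
}

func main() {
	if err := removeIfPresent("82f97bf45783"); err != nil {
		fmt.Println("remove failed:", err)
		return
	}
	fmt.Println("container absent (removed, or never existed)")
}
```
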
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.150294 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78e25299-cf17-451b-8f2f-d980ff184dac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.150341 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78e25299-cf17-451b-8f2f-d980ff184dac-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.150353 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9n2z\" (UniqueName: \"kubernetes.io/projected/78e25299-cf17-451b-8f2f-d980ff184dac-kube-api-access-r9n2z\") on node \"crc\" DevicePath \"\"" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.372950 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" event={"ID":"78e25299-cf17-451b-8f2f-d980ff184dac","Type":"ContainerDied","Data":"2eda9fa2fcab9a28493e8645579a64b990e489232bf1815acaf3197be9722e93"} Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.373002 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eda9fa2fcab9a28493e8645579a64b990e489232bf1815acaf3197be9722e93" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.373043 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lzndd" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.507916 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng"] Jan 27 20:42:53 crc kubenswrapper[4858]: E0127 20:42:53.508435 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5098a1-6553-43e0-80fb-aae744353a50" containerName="registry-server" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.508458 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5098a1-6553-43e0-80fb-aae744353a50" containerName="registry-server" Jan 27 20:42:53 crc kubenswrapper[4858]: E0127 20:42:53.508482 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5098a1-6553-43e0-80fb-aae744353a50" containerName="extract-utilities" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.508493 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5098a1-6553-43e0-80fb-aae744353a50" containerName="extract-utilities" Jan 27 20:42:53 crc kubenswrapper[4858]: E0127 20:42:53.508515 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e25299-cf17-451b-8f2f-d980ff184dac" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.508525 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e25299-cf17-451b-8f2f-d980ff184dac" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 20:42:53 crc kubenswrapper[4858]: E0127 20:42:53.508548 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5098a1-6553-43e0-80fb-aae744353a50" containerName="extract-content" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.508572 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5098a1-6553-43e0-80fb-aae744353a50" containerName="extract-content" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.508828 
4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e25299-cf17-451b-8f2f-d980ff184dac" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.508852 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b5098a1-6553-43e0-80fb-aae744353a50" containerName="registry-server" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.509749 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.517501 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.517534 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.517996 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.520953 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng"] Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.522350 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4x4qb" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.561648 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54wgh\" (UniqueName: \"kubernetes.io/projected/70c1d5d9-384f-4155-b4bc-cdc9185090f0-kube-api-access-54wgh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mbwng\" (UID: \"70c1d5d9-384f-4155-b4bc-cdc9185090f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.562024 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70c1d5d9-384f-4155-b4bc-cdc9185090f0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mbwng\" (UID: \"70c1d5d9-384f-4155-b4bc-cdc9185090f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.562238 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70c1d5d9-384f-4155-b4bc-cdc9185090f0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mbwng\" (UID: \"70c1d5d9-384f-4155-b4bc-cdc9185090f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.664114 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70c1d5d9-384f-4155-b4bc-cdc9185090f0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mbwng\" (UID: \"70c1d5d9-384f-4155-b4bc-cdc9185090f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.664514 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/70c1d5d9-384f-4155-b4bc-cdc9185090f0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mbwng\" (UID: \"70c1d5d9-384f-4155-b4bc-cdc9185090f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.664564 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54wgh\" (UniqueName: \"kubernetes.io/projected/70c1d5d9-384f-4155-b4bc-cdc9185090f0-kube-api-access-54wgh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mbwng\" (UID: \"70c1d5d9-384f-4155-b4bc-cdc9185090f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.668218 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70c1d5d9-384f-4155-b4bc-cdc9185090f0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mbwng\" (UID: \"70c1d5d9-384f-4155-b4bc-cdc9185090f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.670081 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70c1d5d9-384f-4155-b4bc-cdc9185090f0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mbwng\" (UID: \"70c1d5d9-384f-4155-b4bc-cdc9185090f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.680587 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54wgh\" (UniqueName: \"kubernetes.io/projected/70c1d5d9-384f-4155-b4bc-cdc9185090f0-kube-api-access-54wgh\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mbwng\" (UID: \"70c1d5d9-384f-4155-b4bc-cdc9185090f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" Jan 27 20:42:53 crc kubenswrapper[4858]: I0127 20:42:53.828044 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" Jan 27 20:42:54 crc kubenswrapper[4858]: I0127 20:42:54.386724 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng"] Jan 27 20:42:55 crc kubenswrapper[4858]: I0127 20:42:55.054299 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-q27s5"] Jan 27 20:42:55 crc kubenswrapper[4858]: I0127 20:42:55.068178 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-q27s5"] Jan 27 20:42:55 crc kubenswrapper[4858]: I0127 20:42:55.392576 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" event={"ID":"70c1d5d9-384f-4155-b4bc-cdc9185090f0","Type":"ContainerStarted","Data":"dfbb42dcea67b455d69435cbd6b70d7431d20b7da05bcf7df79671687525dd50"} Jan 27 20:42:55 crc kubenswrapper[4858]: I0127 20:42:55.392622 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" event={"ID":"70c1d5d9-384f-4155-b4bc-cdc9185090f0","Type":"ContainerStarted","Data":"4b27258f283c1592b0d5b74c39cc464377935639d9fc1772351cc79c1316bcee"} Jan 27 20:42:55 crc kubenswrapper[4858]: I0127 20:42:55.429269 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" podStartSLOduration=2.00627675 podStartE2EDuration="2.429248172s" podCreationTimestamp="2026-01-27 20:42:53 +0000 UTC" firstStartedPulling="2026-01-27 20:42:54.400100196 +0000 UTC m=+2119.107915902" lastFinishedPulling="2026-01-27 20:42:54.823071618 +0000 UTC m=+2119.530887324" observedRunningTime="2026-01-27 20:42:55.425930899 +0000 UTC m=+2120.133746605" watchObservedRunningTime="2026-01-27 20:42:55.429248172 +0000 UTC m=+2120.137063878" Jan 27 20:42:55 crc kubenswrapper[4858]: I0127 20:42:55.793450 4858 scope.go:117] "RemoveContainer" containerID="de4a1f38a9aa7dd445aa71a0de50952e01bd1913dab33b5530d157608d8b2746" Jan 27 20:42:55 crc kubenswrapper[4858]: I0127 20:42:55.842022 4858 scope.go:117] "RemoveContainer" containerID="42b5c9a3d3c1a209123b95abf557fbb400dded326074f62802f9850e97be50d1" Jan 27 20:42:56 crc kubenswrapper[4858]: I0127 20:42:56.094943 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee8824c1-03d3-4583-808c-c308867369e5" path="/var/lib/kubelet/pods/ee8824c1-03d3-4583-808c-c308867369e5/volumes" Jan 27 20:43:50 crc kubenswrapper[4858]: I0127 20:43:50.205866 4858 generic.go:334] "Generic (PLEG): container finished" podID="70c1d5d9-384f-4155-b4bc-cdc9185090f0" containerID="dfbb42dcea67b455d69435cbd6b70d7431d20b7da05bcf7df79671687525dd50" exitCode=0 Jan 27 20:43:50 crc kubenswrapper[4858]: I0127 20:43:50.205908 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" event={"ID":"70c1d5d9-384f-4155-b4bc-cdc9185090f0","Type":"ContainerDied","Data":"dfbb42dcea67b455d69435cbd6b70d7431d20b7da05bcf7df79671687525dd50"} Jan 27 20:43:51 crc kubenswrapper[4858]: I0127 20:43:51.662967 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" Jan 27 20:43:51 crc kubenswrapper[4858]: I0127 20:43:51.818852 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70c1d5d9-384f-4155-b4bc-cdc9185090f0-ssh-key-openstack-edpm-ipam\") pod \"70c1d5d9-384f-4155-b4bc-cdc9185090f0\" (UID: \"70c1d5d9-384f-4155-b4bc-cdc9185090f0\") " Jan 27 20:43:51 crc kubenswrapper[4858]: I0127 20:43:51.818958 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54wgh\" (UniqueName: \"kubernetes.io/projected/70c1d5d9-384f-4155-b4bc-cdc9185090f0-kube-api-access-54wgh\") pod \"70c1d5d9-384f-4155-b4bc-cdc9185090f0\" (UID: \"70c1d5d9-384f-4155-b4bc-cdc9185090f0\") " Jan 27 20:43:51 crc kubenswrapper[4858]: I0127 20:43:51.819863 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70c1d5d9-384f-4155-b4bc-cdc9185090f0-inventory\") pod \"70c1d5d9-384f-4155-b4bc-cdc9185090f0\" (UID: \"70c1d5d9-384f-4155-b4bc-cdc9185090f0\") " Jan 27 20:43:51 crc kubenswrapper[4858]: I0127 20:43:51.825943 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c1d5d9-384f-4155-b4bc-cdc9185090f0-kube-api-access-54wgh" (OuterVolumeSpecName: "kube-api-access-54wgh") pod "70c1d5d9-384f-4155-b4bc-cdc9185090f0" (UID: "70c1d5d9-384f-4155-b4bc-cdc9185090f0"). InnerVolumeSpecName "kube-api-access-54wgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:43:51 crc kubenswrapper[4858]: I0127 20:43:51.857969 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c1d5d9-384f-4155-b4bc-cdc9185090f0-inventory" (OuterVolumeSpecName: "inventory") pod "70c1d5d9-384f-4155-b4bc-cdc9185090f0" (UID: "70c1d5d9-384f-4155-b4bc-cdc9185090f0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:43:51 crc kubenswrapper[4858]: I0127 20:43:51.865421 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c1d5d9-384f-4155-b4bc-cdc9185090f0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "70c1d5d9-384f-4155-b4bc-cdc9185090f0" (UID: "70c1d5d9-384f-4155-b4bc-cdc9185090f0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
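
The "Observed pod startup duration" record for configure-os above decomposes cleanly if (an inference from the field names, not from kubelet source) podStartSLOduration is podStartE2EDuration minus the image-pull window between firstStartedPulling and lastFinishedPulling:

```go
// Sketch verifying the configure-os latency-tracker record above, under
// the assumption that podStartSLOduration = podStartE2EDuration minus
// the image-pull window. Timestamps are copied verbatim from the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.000000000 -0700 MST"
	first, err := time.Parse(layout, "2026-01-27 20:42:54.400100196 +0000 UTC")
	if err != nil {
		panic(err)
	}
	last, err := time.Parse(layout, "2026-01-27 20:42:54.823071618 +0000 UTC")
	if err != nil {
		panic(err)
	}
	e2e, err := time.ParseDuration("2.429248172s") // podStartE2EDuration
	if err != nil {
		panic(err)
	}
	slo := e2e - last.Sub(first) // drop the 422.971422ms pull window
	fmt.Println(slo)             // 2.00627675s == the logged podStartSLOduration
}
```

The install-os and run-os records earlier in the log decompose the same way, up to a few tens of nanoseconds of clock rounding.
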
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:43:51 crc kubenswrapper[4858]: I0127 20:43:51.923926 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70c1d5d9-384f-4155-b4bc-cdc9185090f0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:43:51 crc kubenswrapper[4858]: I0127 20:43:51.923970 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54wgh\" (UniqueName: \"kubernetes.io/projected/70c1d5d9-384f-4155-b4bc-cdc9185090f0-kube-api-access-54wgh\") on node \"crc\" DevicePath \"\"" Jan 27 20:43:51 crc kubenswrapper[4858]: I0127 20:43:51.923986 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70c1d5d9-384f-4155-b4bc-cdc9185090f0-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.287990 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" event={"ID":"70c1d5d9-384f-4155-b4bc-cdc9185090f0","Type":"ContainerDied","Data":"4b27258f283c1592b0d5b74c39cc464377935639d9fc1772351cc79c1316bcee"} Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.288268 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b27258f283c1592b0d5b74c39cc464377935639d9fc1772351cc79c1316bcee" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.288381 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mbwng" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.433612 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-rn685"] Jan 27 20:43:52 crc kubenswrapper[4858]: E0127 20:43:52.434095 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c1d5d9-384f-4155-b4bc-cdc9185090f0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.434115 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c1d5d9-384f-4155-b4bc-cdc9185090f0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.434289 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c1d5d9-384f-4155-b4bc-cdc9185090f0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.435065 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rn685" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.445169 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4x4qb" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.445353 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.448264 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-rn685\" (UID: \"7ea0384d-7b36-4de2-8718-58c49d6a8ef8\") " pod="openstack/ssh-known-hosts-edpm-deployment-rn685" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.448759 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpb4f\" (UniqueName: \"kubernetes.io/projected/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-kube-api-access-bpb4f\") pod \"ssh-known-hosts-edpm-deployment-rn685\" (UID: \"7ea0384d-7b36-4de2-8718-58c49d6a8ef8\") " pod="openstack/ssh-known-hosts-edpm-deployment-rn685" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.448820 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-rn685\" (UID: \"7ea0384d-7b36-4de2-8718-58c49d6a8ef8\") " pod="openstack/ssh-known-hosts-edpm-deployment-rn685" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.463969 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.464039 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.474292 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-rn685"] Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.612753 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpb4f\" (UniqueName: \"kubernetes.io/projected/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-kube-api-access-bpb4f\") pod \"ssh-known-hosts-edpm-deployment-rn685\" (UID: \"7ea0384d-7b36-4de2-8718-58c49d6a8ef8\") " pod="openstack/ssh-known-hosts-edpm-deployment-rn685" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.612816 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-rn685\" (UID: \"7ea0384d-7b36-4de2-8718-58c49d6a8ef8\") " pod="openstack/ssh-known-hosts-edpm-deployment-rn685" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.612963 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-rn685\" (UID: \"7ea0384d-7b36-4de2-8718-58c49d6a8ef8\") " pod="openstack/ssh-known-hosts-edpm-deployment-rn685" Jan 27 20:43:52 crc 
kubenswrapper[4858]: I0127 20:43:52.618377 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-rn685\" (UID: \"7ea0384d-7b36-4de2-8718-58c49d6a8ef8\") " pod="openstack/ssh-known-hosts-edpm-deployment-rn685" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.624133 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-rn685\" (UID: \"7ea0384d-7b36-4de2-8718-58c49d6a8ef8\") " pod="openstack/ssh-known-hosts-edpm-deployment-rn685" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.648553 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpb4f\" (UniqueName: \"kubernetes.io/projected/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-kube-api-access-bpb4f\") pod \"ssh-known-hosts-edpm-deployment-rn685\" (UID: \"7ea0384d-7b36-4de2-8718-58c49d6a8ef8\") " pod="openstack/ssh-known-hosts-edpm-deployment-rn685" Jan 27 20:43:52 crc kubenswrapper[4858]: I0127 20:43:52.914185 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rn685" Jan 27 20:43:53 crc kubenswrapper[4858]: I0127 20:43:53.523861 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-rn685"] Jan 27 20:43:53 crc kubenswrapper[4858]: W0127 20:43:53.530774 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea0384d_7b36_4de2_8718_58c49d6a8ef8.slice/crio-64e9fb194b54e43d019cb3db0f0d666aac30907da1de5009363a91650c8f80dc WatchSource:0}: Error finding container 64e9fb194b54e43d019cb3db0f0d666aac30907da1de5009363a91650c8f80dc: Status 404 returned error can't find the container with id 64e9fb194b54e43d019cb3db0f0d666aac30907da1de5009363a91650c8f80dc Jan 27 20:43:54 crc kubenswrapper[4858]: I0127 20:43:54.309591 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rn685" event={"ID":"7ea0384d-7b36-4de2-8718-58c49d6a8ef8","Type":"ContainerStarted","Data":"64e9fb194b54e43d019cb3db0f0d666aac30907da1de5009363a91650c8f80dc"} Jan 27 20:43:55 crc kubenswrapper[4858]: I0127 20:43:55.327226 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rn685" event={"ID":"7ea0384d-7b36-4de2-8718-58c49d6a8ef8","Type":"ContainerStarted","Data":"a6ba65091bf5b6148b5022a55b6215f1def4631bc406bac9cb2b5760ad1c51ad"} Jan 27 20:43:55 crc kubenswrapper[4858]: I0127 20:43:55.356626 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-rn685" podStartSLOduration=2.6917360390000002 podStartE2EDuration="3.356596358s" podCreationTimestamp="2026-01-27 20:43:52 +0000 UTC" firstStartedPulling="2026-01-27 20:43:53.532426278 +0000 UTC m=+2178.240241984" lastFinishedPulling="2026-01-27 20:43:54.197286597 +0000 UTC m=+2178.905102303" observedRunningTime="2026-01-27 20:43:55.348286756 +0000 UTC m=+2180.056102492" watchObservedRunningTime="2026-01-27 20:43:55.356596358 +0000 UTC m=+2180.064412094" Jan 27 20:43:55 crc kubenswrapper[4858]: I0127 20:43:55.975435 4858 scope.go:117] "RemoveContainer" 
containerID="e733cf2c2aa94c74cb74a00c484e07cd64a85c7443ea5a453ef88b73df4437e1" Jan 27 20:43:59 crc kubenswrapper[4858]: I0127 20:43:59.329311 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:43:59 crc kubenswrapper[4858]: I0127 20:43:59.329650 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:44:02 crc kubenswrapper[4858]: I0127 20:44:02.399824 4858 generic.go:334] "Generic (PLEG): container finished" podID="7ea0384d-7b36-4de2-8718-58c49d6a8ef8" containerID="a6ba65091bf5b6148b5022a55b6215f1def4631bc406bac9cb2b5760ad1c51ad" exitCode=0 Jan 27 20:44:02 crc kubenswrapper[4858]: I0127 20:44:02.399944 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rn685" event={"ID":"7ea0384d-7b36-4de2-8718-58c49d6a8ef8","Type":"ContainerDied","Data":"a6ba65091bf5b6148b5022a55b6215f1def4631bc406bac9cb2b5760ad1c51ad"} Jan 27 20:44:03 crc kubenswrapper[4858]: I0127 20:44:03.890063 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rn685" Jan 27 20:44:03 crc kubenswrapper[4858]: I0127 20:44:03.972667 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-ssh-key-openstack-edpm-ipam\") pod \"7ea0384d-7b36-4de2-8718-58c49d6a8ef8\" (UID: \"7ea0384d-7b36-4de2-8718-58c49d6a8ef8\") " Jan 27 20:44:03 crc kubenswrapper[4858]: I0127 20:44:03.972847 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-inventory-0\") pod \"7ea0384d-7b36-4de2-8718-58c49d6a8ef8\" (UID: \"7ea0384d-7b36-4de2-8718-58c49d6a8ef8\") " Jan 27 20:44:03 crc kubenswrapper[4858]: I0127 20:44:03.972888 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpb4f\" (UniqueName: \"kubernetes.io/projected/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-kube-api-access-bpb4f\") pod \"7ea0384d-7b36-4de2-8718-58c49d6a8ef8\" (UID: \"7ea0384d-7b36-4de2-8718-58c49d6a8ef8\") " Jan 27 20:44:03 crc kubenswrapper[4858]: I0127 20:44:03.986614 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-kube-api-access-bpb4f" (OuterVolumeSpecName: "kube-api-access-bpb4f") pod "7ea0384d-7b36-4de2-8718-58c49d6a8ef8" (UID: "7ea0384d-7b36-4de2-8718-58c49d6a8ef8"). InnerVolumeSpecName "kube-api-access-bpb4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.009456 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7ea0384d-7b36-4de2-8718-58c49d6a8ef8" (UID: "7ea0384d-7b36-4de2-8718-58c49d6a8ef8"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.014929 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "7ea0384d-7b36-4de2-8718-58c49d6a8ef8" (UID: "7ea0384d-7b36-4de2-8718-58c49d6a8ef8"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.075112 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.075492 4858 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.075507 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpb4f\" (UniqueName: \"kubernetes.io/projected/7ea0384d-7b36-4de2-8718-58c49d6a8ef8-kube-api-access-bpb4f\") on node \"crc\" DevicePath \"\"" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.422985 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-rn685" event={"ID":"7ea0384d-7b36-4de2-8718-58c49d6a8ef8","Type":"ContainerDied","Data":"64e9fb194b54e43d019cb3db0f0d666aac30907da1de5009363a91650c8f80dc"} Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.423040 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64e9fb194b54e43d019cb3db0f0d666aac30907da1de5009363a91650c8f80dc" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.423054 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-rn685" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.504013 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l"] Jan 27 20:44:04 crc kubenswrapper[4858]: E0127 20:44:04.504501 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea0384d-7b36-4de2-8718-58c49d6a8ef8" containerName="ssh-known-hosts-edpm-deployment" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.504517 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea0384d-7b36-4de2-8718-58c49d6a8ef8" containerName="ssh-known-hosts-edpm-deployment" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.504728 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea0384d-7b36-4de2-8718-58c49d6a8ef8" containerName="ssh-known-hosts-edpm-deployment" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.505468 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.507508 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4x4qb" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.507732 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.507852 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.508906 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.518317 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l"] Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.586892 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6dlf\" (UniqueName: \"kubernetes.io/projected/c5817364-db24-4f51-b709-6ec41b069f0b-kube-api-access-r6dlf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ftc8l\" (UID: \"c5817364-db24-4f51-b709-6ec41b069f0b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.587010 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5817364-db24-4f51-b709-6ec41b069f0b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ftc8l\" (UID: \"c5817364-db24-4f51-b709-6ec41b069f0b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.587038 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5817364-db24-4f51-b709-6ec41b069f0b-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ftc8l\" (UID: \"c5817364-db24-4f51-b709-6ec41b069f0b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.689238 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6dlf\" (UniqueName: \"kubernetes.io/projected/c5817364-db24-4f51-b709-6ec41b069f0b-kube-api-access-r6dlf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ftc8l\" (UID: \"c5817364-db24-4f51-b709-6ec41b069f0b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.689400 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5817364-db24-4f51-b709-6ec41b069f0b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ftc8l\" (UID: \"c5817364-db24-4f51-b709-6ec41b069f0b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.689443 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5817364-db24-4f51-b709-6ec41b069f0b-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-ftc8l\" (UID: \"c5817364-db24-4f51-b709-6ec41b069f0b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.695003 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5817364-db24-4f51-b709-6ec41b069f0b-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ftc8l\" (UID: \"c5817364-db24-4f51-b709-6ec41b069f0b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.695784 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5817364-db24-4f51-b709-6ec41b069f0b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ftc8l\" (UID: \"c5817364-db24-4f51-b709-6ec41b069f0b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.714539 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6dlf\" (UniqueName: \"kubernetes.io/projected/c5817364-db24-4f51-b709-6ec41b069f0b-kube-api-access-r6dlf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ftc8l\" (UID: \"c5817364-db24-4f51-b709-6ec41b069f0b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" Jan 27 20:44:04 crc kubenswrapper[4858]: I0127 20:44:04.831747 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" Jan 27 20:44:05 crc kubenswrapper[4858]: I0127 20:44:05.491212 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l"] Jan 27 20:44:05 crc kubenswrapper[4858]: W0127 20:44:05.494023 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5817364_db24_4f51_b709_6ec41b069f0b.slice/crio-26d9e1818f9d9c3548290c394d8b4410cd4aa810497e92924a324cac96250804 WatchSource:0}: Error finding container 26d9e1818f9d9c3548290c394d8b4410cd4aa810497e92924a324cac96250804: Status 404 returned error can't find the container with id 26d9e1818f9d9c3548290c394d8b4410cd4aa810497e92924a324cac96250804 Jan 27 20:44:05 crc kubenswrapper[4858]: I0127 20:44:05.497326 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 20:44:06 crc kubenswrapper[4858]: I0127 20:44:06.457617 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" event={"ID":"c5817364-db24-4f51-b709-6ec41b069f0b","Type":"ContainerStarted","Data":"26d9e1818f9d9c3548290c394d8b4410cd4aa810497e92924a324cac96250804"} Jan 27 20:44:07 crc kubenswrapper[4858]: I0127 20:44:07.477822 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" event={"ID":"c5817364-db24-4f51-b709-6ec41b069f0b","Type":"ContainerStarted","Data":"863b52c08bfcb3fd61a072dbff6ef5f77fac90de823c6d0f302d9b9b717aca14"} Jan 27 20:44:07 crc kubenswrapper[4858]: I0127 20:44:07.502976 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" podStartSLOduration=2.24522171 podStartE2EDuration="3.502947058s" podCreationTimestamp="2026-01-27 20:44:04 +0000 UTC" 
firstStartedPulling="2026-01-27 20:44:05.497088744 +0000 UTC m=+2190.204904450" lastFinishedPulling="2026-01-27 20:44:06.754814052 +0000 UTC m=+2191.462629798" observedRunningTime="2026-01-27 20:44:07.500266229 +0000 UTC m=+2192.208081995" watchObservedRunningTime="2026-01-27 20:44:07.502947058 +0000 UTC m=+2192.210762764" Jan 27 20:44:16 crc kubenswrapper[4858]: I0127 20:44:16.582264 4858 generic.go:334] "Generic (PLEG): container finished" podID="c5817364-db24-4f51-b709-6ec41b069f0b" containerID="863b52c08bfcb3fd61a072dbff6ef5f77fac90de823c6d0f302d9b9b717aca14" exitCode=0 Jan 27 20:44:16 crc kubenswrapper[4858]: I0127 20:44:16.582385 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" event={"ID":"c5817364-db24-4f51-b709-6ec41b069f0b","Type":"ContainerDied","Data":"863b52c08bfcb3fd61a072dbff6ef5f77fac90de823c6d0f302d9b9b717aca14"} Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.118830 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.198459 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6dlf\" (UniqueName: \"kubernetes.io/projected/c5817364-db24-4f51-b709-6ec41b069f0b-kube-api-access-r6dlf\") pod \"c5817364-db24-4f51-b709-6ec41b069f0b\" (UID: \"c5817364-db24-4f51-b709-6ec41b069f0b\") " Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.198621 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5817364-db24-4f51-b709-6ec41b069f0b-ssh-key-openstack-edpm-ipam\") pod \"c5817364-db24-4f51-b709-6ec41b069f0b\" (UID: \"c5817364-db24-4f51-b709-6ec41b069f0b\") " Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.198678 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5817364-db24-4f51-b709-6ec41b069f0b-inventory\") pod \"c5817364-db24-4f51-b709-6ec41b069f0b\" (UID: \"c5817364-db24-4f51-b709-6ec41b069f0b\") " Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.206755 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5817364-db24-4f51-b709-6ec41b069f0b-kube-api-access-r6dlf" (OuterVolumeSpecName: "kube-api-access-r6dlf") pod "c5817364-db24-4f51-b709-6ec41b069f0b" (UID: "c5817364-db24-4f51-b709-6ec41b069f0b"). InnerVolumeSpecName "kube-api-access-r6dlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.242819 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5817364-db24-4f51-b709-6ec41b069f0b-inventory" (OuterVolumeSpecName: "inventory") pod "c5817364-db24-4f51-b709-6ec41b069f0b" (UID: "c5817364-db24-4f51-b709-6ec41b069f0b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.247790 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5817364-db24-4f51-b709-6ec41b069f0b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c5817364-db24-4f51-b709-6ec41b069f0b" (UID: "c5817364-db24-4f51-b709-6ec41b069f0b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.301415 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6dlf\" (UniqueName: \"kubernetes.io/projected/c5817364-db24-4f51-b709-6ec41b069f0b-kube-api-access-r6dlf\") on node \"crc\" DevicePath \"\"" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.301450 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5817364-db24-4f51-b709-6ec41b069f0b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.301462 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5817364-db24-4f51-b709-6ec41b069f0b-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.602977 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" event={"ID":"c5817364-db24-4f51-b709-6ec41b069f0b","Type":"ContainerDied","Data":"26d9e1818f9d9c3548290c394d8b4410cd4aa810497e92924a324cac96250804"} Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.603020 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d9e1818f9d9c3548290c394d8b4410cd4aa810497e92924a324cac96250804" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.603078 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ftc8l" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.707765 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db"] Jan 27 20:44:18 crc kubenswrapper[4858]: E0127 20:44:18.708330 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5817364-db24-4f51-b709-6ec41b069f0b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.708347 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5817364-db24-4f51-b709-6ec41b069f0b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.708629 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5817364-db24-4f51-b709-6ec41b069f0b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.709669 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.712405 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.712426 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.712478 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4x4qb" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.713243 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.719874 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db"] Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.810870 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgxq4\" (UniqueName: \"kubernetes.io/projected/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-kube-api-access-kgxq4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r79db\" (UID: \"7a0534e5-d746-4c1e-93a0-9cd2b4f79271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.811269 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r79db\" (UID: \"7a0534e5-d746-4c1e-93a0-9cd2b4f79271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.811347 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r79db\" (UID: \"7a0534e5-d746-4c1e-93a0-9cd2b4f79271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.913220 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgxq4\" (UniqueName: \"kubernetes.io/projected/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-kube-api-access-kgxq4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r79db\" (UID: \"7a0534e5-d746-4c1e-93a0-9cd2b4f79271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.913354 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r79db\" (UID: \"7a0534e5-d746-4c1e-93a0-9cd2b4f79271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.913461 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-r79db\" (UID: \"7a0534e5-d746-4c1e-93a0-9cd2b4f79271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.924286 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r79db\" (UID: \"7a0534e5-d746-4c1e-93a0-9cd2b4f79271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.924293 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r79db\" (UID: \"7a0534e5-d746-4c1e-93a0-9cd2b4f79271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" Jan 27 20:44:18 crc kubenswrapper[4858]: I0127 20:44:18.932184 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgxq4\" (UniqueName: \"kubernetes.io/projected/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-kube-api-access-kgxq4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r79db\" (UID: \"7a0534e5-d746-4c1e-93a0-9cd2b4f79271\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" Jan 27 20:44:19 crc kubenswrapper[4858]: I0127 20:44:19.028770 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" Jan 27 20:44:19 crc kubenswrapper[4858]: I0127 20:44:19.584908 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db"] Jan 27 20:44:19 crc kubenswrapper[4858]: W0127 20:44:19.586589 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a0534e5_d746_4c1e_93a0_9cd2b4f79271.slice/crio-eb473cc8603a928c0f685771898fa26cfe5a6070c8c9662b98c8c446fc6aeed9 WatchSource:0}: Error finding container eb473cc8603a928c0f685771898fa26cfe5a6070c8c9662b98c8c446fc6aeed9: Status 404 returned error can't find the container with id eb473cc8603a928c0f685771898fa26cfe5a6070c8c9662b98c8c446fc6aeed9 Jan 27 20:44:19 crc kubenswrapper[4858]: I0127 20:44:19.613435 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" event={"ID":"7a0534e5-d746-4c1e-93a0-9cd2b4f79271","Type":"ContainerStarted","Data":"eb473cc8603a928c0f685771898fa26cfe5a6070c8c9662b98c8c446fc6aeed9"} Jan 27 20:44:20 crc kubenswrapper[4858]: I0127 20:44:20.626623 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" event={"ID":"7a0534e5-d746-4c1e-93a0-9cd2b4f79271","Type":"ContainerStarted","Data":"5ef10b4902be740db5d837f8435f9a33152e9446f9f1127ba4a0e1cfe5f47b77"} Jan 27 20:44:20 crc kubenswrapper[4858]: I0127 20:44:20.649998 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" podStartSLOduration=1.995502923 podStartE2EDuration="2.649976245s" podCreationTimestamp="2026-01-27 20:44:18 +0000 UTC" firstStartedPulling="2026-01-27 20:44:19.592104185 +0000 UTC m=+2204.299919891" lastFinishedPulling="2026-01-27 20:44:20.246577507 +0000 UTC 
m=+2204.954393213" observedRunningTime="2026-01-27 20:44:20.642793731 +0000 UTC m=+2205.350609457" watchObservedRunningTime="2026-01-27 20:44:20.649976245 +0000 UTC m=+2205.357791951" Jan 27 20:44:29 crc kubenswrapper[4858]: I0127 20:44:29.329362 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:44:29 crc kubenswrapper[4858]: I0127 20:44:29.330072 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:44:30 crc kubenswrapper[4858]: I0127 20:44:30.729149 4858 generic.go:334] "Generic (PLEG): container finished" podID="7a0534e5-d746-4c1e-93a0-9cd2b4f79271" containerID="5ef10b4902be740db5d837f8435f9a33152e9446f9f1127ba4a0e1cfe5f47b77" exitCode=0 Jan 27 20:44:30 crc kubenswrapper[4858]: I0127 20:44:30.729221 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" event={"ID":"7a0534e5-d746-4c1e-93a0-9cd2b4f79271","Type":"ContainerDied","Data":"5ef10b4902be740db5d837f8435f9a33152e9446f9f1127ba4a0e1cfe5f47b77"} Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.284838 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.327718 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-inventory\") pod \"7a0534e5-d746-4c1e-93a0-9cd2b4f79271\" (UID: \"7a0534e5-d746-4c1e-93a0-9cd2b4f79271\") " Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.327940 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgxq4\" (UniqueName: \"kubernetes.io/projected/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-kube-api-access-kgxq4\") pod \"7a0534e5-d746-4c1e-93a0-9cd2b4f79271\" (UID: \"7a0534e5-d746-4c1e-93a0-9cd2b4f79271\") " Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.328095 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-ssh-key-openstack-edpm-ipam\") pod \"7a0534e5-d746-4c1e-93a0-9cd2b4f79271\" (UID: \"7a0534e5-d746-4c1e-93a0-9cd2b4f79271\") " Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.333794 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-kube-api-access-kgxq4" (OuterVolumeSpecName: "kube-api-access-kgxq4") pod "7a0534e5-d746-4c1e-93a0-9cd2b4f79271" (UID: "7a0534e5-d746-4c1e-93a0-9cd2b4f79271"). InnerVolumeSpecName "kube-api-access-kgxq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.364115 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-inventory" (OuterVolumeSpecName: "inventory") pod "7a0534e5-d746-4c1e-93a0-9cd2b4f79271" (UID: "7a0534e5-d746-4c1e-93a0-9cd2b4f79271"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.364422 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7a0534e5-d746-4c1e-93a0-9cd2b4f79271" (UID: "7a0534e5-d746-4c1e-93a0-9cd2b4f79271"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.430901 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.430951 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.430968 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgxq4\" (UniqueName: \"kubernetes.io/projected/7a0534e5-d746-4c1e-93a0-9cd2b4f79271-kube-api-access-kgxq4\") on node \"crc\" DevicePath \"\"" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.749797 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" event={"ID":"7a0534e5-d746-4c1e-93a0-9cd2b4f79271","Type":"ContainerDied","Data":"eb473cc8603a928c0f685771898fa26cfe5a6070c8c9662b98c8c446fc6aeed9"} Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.749841 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb473cc8603a928c0f685771898fa26cfe5a6070c8c9662b98c8c446fc6aeed9" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.749853 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r79db" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.851992 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj"] Jan 27 20:44:32 crc kubenswrapper[4858]: E0127 20:44:32.852511 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0534e5-d746-4c1e-93a0-9cd2b4f79271" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.852533 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0534e5-d746-4c1e-93a0-9cd2b4f79271" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.852814 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0534e5-d746-4c1e-93a0-9cd2b4f79271" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.853629 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.856085 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.856304 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.856906 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.857008 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.857083 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4x4qb" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.857165 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.859659 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.866154 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.876707 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj"] Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.941813 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.941863 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.941898 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.941955 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" 
(UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.942063 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.942096 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.942118 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.942141 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.942409 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlffk\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-kube-api-access-zlffk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.942540 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.942627 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:32 crc 
kubenswrapper[4858]: I0127 20:44:32.942683 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.942854 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:32 crc kubenswrapper[4858]: I0127 20:44:32.942974 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.044735 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.044813 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlffk\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-kube-api-access-zlffk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.044839 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.044862 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.044881 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.044926 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.044953 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.044992 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.045007 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.045029 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.045054 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.045097 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.045119 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.045141 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.050630 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.051008 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.051059 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.051402 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.051737 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.052851 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.052916 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.053134 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.053768 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.054236 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.055579 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.062485 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.063055 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlffk\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-kube-api-access-zlffk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.067209 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d76mj\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.210776 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:44:33 crc kubenswrapper[4858]: I0127 20:44:33.806484 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj"] Jan 27 20:44:33 crc kubenswrapper[4858]: W0127 20:44:33.815074 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda05def4c_d0a6_4e87_8b26_8d72512941a2.slice/crio-374c9c3055edf4695683b19e47b982e3a120298ab4dba14d66aac093e10b6b38 WatchSource:0}: Error finding container 374c9c3055edf4695683b19e47b982e3a120298ab4dba14d66aac093e10b6b38: Status 404 returned error can't find the container with id 374c9c3055edf4695683b19e47b982e3a120298ab4dba14d66aac093e10b6b38 Jan 27 20:44:34 crc kubenswrapper[4858]: I0127 20:44:34.772227 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" event={"ID":"a05def4c-d0a6-4e87-8b26-8d72512941a2","Type":"ContainerStarted","Data":"b125d883936dc1a8a235d9d9633c44c90dd9e024b33a9f08a7c13d9be1ec747c"} Jan 27 20:44:34 crc kubenswrapper[4858]: I0127 20:44:34.772614 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" event={"ID":"a05def4c-d0a6-4e87-8b26-8d72512941a2","Type":"ContainerStarted","Data":"374c9c3055edf4695683b19e47b982e3a120298ab4dba14d66aac093e10b6b38"} Jan 27 20:44:34 crc kubenswrapper[4858]: I0127 20:44:34.806334 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" podStartSLOduration=2.404393643 podStartE2EDuration="2.806304971s" podCreationTimestamp="2026-01-27 20:44:32 +0000 UTC" firstStartedPulling="2026-01-27 20:44:33.817813344 +0000 UTC m=+2218.525629050" lastFinishedPulling="2026-01-27 20:44:34.219724672 +0000 UTC m=+2218.927540378" observedRunningTime="2026-01-27 20:44:34.796414498 +0000 UTC m=+2219.504230234" watchObservedRunningTime="2026-01-27 20:44:34.806304971 +0000 UTC m=+2219.514120677" Jan 27 20:44:59 crc kubenswrapper[4858]: I0127 20:44:59.329206 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:44:59 crc kubenswrapper[4858]: I0127 20:44:59.329785 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 20:44:59 crc kubenswrapper[4858]: I0127 20:44:59.329831 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:44:59 crc kubenswrapper[4858]: I0127 20:44:59.330612 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 20:44:59 crc kubenswrapper[4858]: I0127 20:44:59.330674 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" gracePeriod=600 Jan 27 20:44:59 crc kubenswrapper[4858]: E0127 20:44:59.452778 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.039744 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" exitCode=0 Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.040038 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710"} Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.040075 4858 scope.go:117] "RemoveContainer" containerID="336e4dbda5f2330cb97a3401d43a535416bd6411da7f0e5d5731c4398198a98c" Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.040851 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:45:00 crc kubenswrapper[4858]: E0127 20:45:00.041151 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.214845 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d"] Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.221253 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d" Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.223845 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.224393 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.228744 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d"] Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.375514 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5ab60935-49aa-431d-80f1-59101d72d598-secret-volume\") pod \"collect-profiles-29492445-8m49d\" (UID: \"5ab60935-49aa-431d-80f1-59101d72d598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d" Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.375706 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4spww\" (UniqueName: \"kubernetes.io/projected/5ab60935-49aa-431d-80f1-59101d72d598-kube-api-access-4spww\") pod \"collect-profiles-29492445-8m49d\" (UID: \"5ab60935-49aa-431d-80f1-59101d72d598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d" Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.375770 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ab60935-49aa-431d-80f1-59101d72d598-config-volume\") pod \"collect-profiles-29492445-8m49d\" (UID: \"5ab60935-49aa-431d-80f1-59101d72d598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d" Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.478525 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4spww\" (UniqueName: \"kubernetes.io/projected/5ab60935-49aa-431d-80f1-59101d72d598-kube-api-access-4spww\") pod \"collect-profiles-29492445-8m49d\" (UID: \"5ab60935-49aa-431d-80f1-59101d72d598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d" Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.478688 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ab60935-49aa-431d-80f1-59101d72d598-config-volume\") pod \"collect-profiles-29492445-8m49d\" (UID: \"5ab60935-49aa-431d-80f1-59101d72d598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d" Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.478887 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5ab60935-49aa-431d-80f1-59101d72d598-secret-volume\") pod \"collect-profiles-29492445-8m49d\" (UID: \"5ab60935-49aa-431d-80f1-59101d72d598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d" Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.479930 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ab60935-49aa-431d-80f1-59101d72d598-config-volume\") pod 
\"collect-profiles-29492445-8m49d\" (UID: \"5ab60935-49aa-431d-80f1-59101d72d598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d" Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.488372 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5ab60935-49aa-431d-80f1-59101d72d598-secret-volume\") pod \"collect-profiles-29492445-8m49d\" (UID: \"5ab60935-49aa-431d-80f1-59101d72d598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d" Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.498946 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4spww\" (UniqueName: \"kubernetes.io/projected/5ab60935-49aa-431d-80f1-59101d72d598-kube-api-access-4spww\") pod \"collect-profiles-29492445-8m49d\" (UID: \"5ab60935-49aa-431d-80f1-59101d72d598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d" Jan 27 20:45:00 crc kubenswrapper[4858]: I0127 20:45:00.546280 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d" Jan 27 20:45:01 crc kubenswrapper[4858]: I0127 20:45:01.017317 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d"] Jan 27 20:45:01 crc kubenswrapper[4858]: I0127 20:45:01.052173 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d" event={"ID":"5ab60935-49aa-431d-80f1-59101d72d598","Type":"ContainerStarted","Data":"d861eacd00418e0780959e8259e8ea1967244ba9311643ef61f7fa53b9d5ca43"} Jan 27 20:45:02 crc kubenswrapper[4858]: I0127 20:45:02.065849 4858 generic.go:334] "Generic (PLEG): container finished" podID="5ab60935-49aa-431d-80f1-59101d72d598" containerID="68e09df089df7ffec73db64bd6882efe0ca038b02e11d81449c42e7808e42ed9" exitCode=0 Jan 27 20:45:02 crc kubenswrapper[4858]: I0127 20:45:02.065911 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d" event={"ID":"5ab60935-49aa-431d-80f1-59101d72d598","Type":"ContainerDied","Data":"68e09df089df7ffec73db64bd6882efe0ca038b02e11d81449c42e7808e42ed9"} Jan 27 20:45:03 crc kubenswrapper[4858]: I0127 20:45:03.410442 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d" Jan 27 20:45:03 crc kubenswrapper[4858]: I0127 20:45:03.546310 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ab60935-49aa-431d-80f1-59101d72d598-config-volume\") pod \"5ab60935-49aa-431d-80f1-59101d72d598\" (UID: \"5ab60935-49aa-431d-80f1-59101d72d598\") " Jan 27 20:45:03 crc kubenswrapper[4858]: I0127 20:45:03.546471 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4spww\" (UniqueName: \"kubernetes.io/projected/5ab60935-49aa-431d-80f1-59101d72d598-kube-api-access-4spww\") pod \"5ab60935-49aa-431d-80f1-59101d72d598\" (UID: \"5ab60935-49aa-431d-80f1-59101d72d598\") " Jan 27 20:45:03 crc kubenswrapper[4858]: I0127 20:45:03.546503 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5ab60935-49aa-431d-80f1-59101d72d598-secret-volume\") pod \"5ab60935-49aa-431d-80f1-59101d72d598\" (UID: \"5ab60935-49aa-431d-80f1-59101d72d598\") " Jan 27 20:45:03 crc kubenswrapper[4858]: I0127 20:45:03.547529 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ab60935-49aa-431d-80f1-59101d72d598-config-volume" (OuterVolumeSpecName: "config-volume") pod "5ab60935-49aa-431d-80f1-59101d72d598" (UID: "5ab60935-49aa-431d-80f1-59101d72d598"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:45:03 crc kubenswrapper[4858]: I0127 20:45:03.552662 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ab60935-49aa-431d-80f1-59101d72d598-kube-api-access-4spww" (OuterVolumeSpecName: "kube-api-access-4spww") pod "5ab60935-49aa-431d-80f1-59101d72d598" (UID: "5ab60935-49aa-431d-80f1-59101d72d598"). InnerVolumeSpecName "kube-api-access-4spww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:45:03 crc kubenswrapper[4858]: I0127 20:45:03.555157 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ab60935-49aa-431d-80f1-59101d72d598-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5ab60935-49aa-431d-80f1-59101d72d598" (UID: "5ab60935-49aa-431d-80f1-59101d72d598"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:45:03 crc kubenswrapper[4858]: I0127 20:45:03.648748 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4spww\" (UniqueName: \"kubernetes.io/projected/5ab60935-49aa-431d-80f1-59101d72d598-kube-api-access-4spww\") on node \"crc\" DevicePath \"\"" Jan 27 20:45:03 crc kubenswrapper[4858]: I0127 20:45:03.648785 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5ab60935-49aa-431d-80f1-59101d72d598-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 20:45:03 crc kubenswrapper[4858]: I0127 20:45:03.648795 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ab60935-49aa-431d-80f1-59101d72d598-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 20:45:04 crc kubenswrapper[4858]: I0127 20:45:04.115669 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d" Jan 27 20:45:04 crc kubenswrapper[4858]: I0127 20:45:04.125876 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d" event={"ID":"5ab60935-49aa-431d-80f1-59101d72d598","Type":"ContainerDied","Data":"d861eacd00418e0780959e8259e8ea1967244ba9311643ef61f7fa53b9d5ca43"} Jan 27 20:45:04 crc kubenswrapper[4858]: I0127 20:45:04.125933 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d861eacd00418e0780959e8259e8ea1967244ba9311643ef61f7fa53b9d5ca43" Jan 27 20:45:04 crc kubenswrapper[4858]: I0127 20:45:04.481177 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5"] Jan 27 20:45:04 crc kubenswrapper[4858]: I0127 20:45:04.489811 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492400-mnbk5"] Jan 27 20:45:06 crc kubenswrapper[4858]: I0127 20:45:06.096005 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e828d5-7d0a-451f-98bd-05c2a2fcbea9" path="/var/lib/kubelet/pods/81e828d5-7d0a-451f-98bd-05c2a2fcbea9/volumes" Jan 27 20:45:13 crc kubenswrapper[4858]: I0127 20:45:13.084101 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:45:13 crc kubenswrapper[4858]: E0127 20:45:13.086161 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:45:16 crc kubenswrapper[4858]: I0127 20:45:16.285510 4858 generic.go:334] "Generic (PLEG): container finished" podID="a05def4c-d0a6-4e87-8b26-8d72512941a2" containerID="b125d883936dc1a8a235d9d9633c44c90dd9e024b33a9f08a7c13d9be1ec747c" exitCode=0 Jan 27 20:45:16 crc kubenswrapper[4858]: I0127 20:45:16.285734 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" event={"ID":"a05def4c-d0a6-4e87-8b26-8d72512941a2","Type":"ContainerDied","Data":"b125d883936dc1a8a235d9d9633c44c90dd9e024b33a9f08a7c13d9be1ec747c"} Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.910635 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.942386 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"a05def4c-d0a6-4e87-8b26-8d72512941a2\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.942450 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"a05def4c-d0a6-4e87-8b26-8d72512941a2\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.942480 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"a05def4c-d0a6-4e87-8b26-8d72512941a2\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.942536 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-nova-combined-ca-bundle\") pod \"a05def4c-d0a6-4e87-8b26-8d72512941a2\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.942608 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-bootstrap-combined-ca-bundle\") pod \"a05def4c-d0a6-4e87-8b26-8d72512941a2\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.942646 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-ssh-key-openstack-edpm-ipam\") pod \"a05def4c-d0a6-4e87-8b26-8d72512941a2\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.942666 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-libvirt-combined-ca-bundle\") pod \"a05def4c-d0a6-4e87-8b26-8d72512941a2\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.942775 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-ovn-combined-ca-bundle\") pod \"a05def4c-d0a6-4e87-8b26-8d72512941a2\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.942800 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-repo-setup-combined-ca-bundle\") pod \"a05def4c-d0a6-4e87-8b26-8d72512941a2\" (UID: 
\"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.942836 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"a05def4c-d0a6-4e87-8b26-8d72512941a2\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.942891 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-inventory\") pod \"a05def4c-d0a6-4e87-8b26-8d72512941a2\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.942912 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlffk\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-kube-api-access-zlffk\") pod \"a05def4c-d0a6-4e87-8b26-8d72512941a2\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.942982 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-neutron-metadata-combined-ca-bundle\") pod \"a05def4c-d0a6-4e87-8b26-8d72512941a2\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.943005 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-telemetry-combined-ca-bundle\") pod \"a05def4c-d0a6-4e87-8b26-8d72512941a2\" (UID: \"a05def4c-d0a6-4e87-8b26-8d72512941a2\") " Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.951986 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a05def4c-d0a6-4e87-8b26-8d72512941a2" (UID: "a05def4c-d0a6-4e87-8b26-8d72512941a2"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.952038 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a05def4c-d0a6-4e87-8b26-8d72512941a2" (UID: "a05def4c-d0a6-4e87-8b26-8d72512941a2"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.952688 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a05def4c-d0a6-4e87-8b26-8d72512941a2" (UID: "a05def4c-d0a6-4e87-8b26-8d72512941a2"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.964112 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-kube-api-access-zlffk" (OuterVolumeSpecName: "kube-api-access-zlffk") pod "a05def4c-d0a6-4e87-8b26-8d72512941a2" (UID: "a05def4c-d0a6-4e87-8b26-8d72512941a2"). InnerVolumeSpecName "kube-api-access-zlffk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.964150 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "a05def4c-d0a6-4e87-8b26-8d72512941a2" (UID: "a05def4c-d0a6-4e87-8b26-8d72512941a2"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.964639 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "a05def4c-d0a6-4e87-8b26-8d72512941a2" (UID: "a05def4c-d0a6-4e87-8b26-8d72512941a2"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.964833 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "a05def4c-d0a6-4e87-8b26-8d72512941a2" (UID: "a05def4c-d0a6-4e87-8b26-8d72512941a2"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.966710 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "a05def4c-d0a6-4e87-8b26-8d72512941a2" (UID: "a05def4c-d0a6-4e87-8b26-8d72512941a2"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.967030 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a05def4c-d0a6-4e87-8b26-8d72512941a2" (UID: "a05def4c-d0a6-4e87-8b26-8d72512941a2"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.974846 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "a05def4c-d0a6-4e87-8b26-8d72512941a2" (UID: "a05def4c-d0a6-4e87-8b26-8d72512941a2"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.980520 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a05def4c-d0a6-4e87-8b26-8d72512941a2" (UID: "a05def4c-d0a6-4e87-8b26-8d72512941a2"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.990454 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a05def4c-d0a6-4e87-8b26-8d72512941a2" (UID: "a05def4c-d0a6-4e87-8b26-8d72512941a2"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:45:17 crc kubenswrapper[4858]: I0127 20:45:17.999307 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-inventory" (OuterVolumeSpecName: "inventory") pod "a05def4c-d0a6-4e87-8b26-8d72512941a2" (UID: "a05def4c-d0a6-4e87-8b26-8d72512941a2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.009795 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a05def4c-d0a6-4e87-8b26-8d72512941a2" (UID: "a05def4c-d0a6-4e87-8b26-8d72512941a2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.045252 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.045472 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.045574 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlffk\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-kube-api-access-zlffk\") on node \"crc\" DevicePath \"\"" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.045674 4858 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.045756 4858 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.045834 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.045916 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.045994 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a05def4c-d0a6-4e87-8b26-8d72512941a2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.046078 4858 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.046163 4858 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.046236 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.046307 4858 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.046378 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.046448 4858 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05def4c-d0a6-4e87-8b26-8d72512941a2-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.312481 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.312390 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d76mj" event={"ID":"a05def4c-d0a6-4e87-8b26-8d72512941a2","Type":"ContainerDied","Data":"374c9c3055edf4695683b19e47b982e3a120298ab4dba14d66aac093e10b6b38"} Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.313021 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="374c9c3055edf4695683b19e47b982e3a120298ab4dba14d66aac093e10b6b38" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.419238 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9"] Jan 27 20:45:18 crc kubenswrapper[4858]: E0127 20:45:18.419749 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a05def4c-d0a6-4e87-8b26-8d72512941a2" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.419775 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a05def4c-d0a6-4e87-8b26-8d72512941a2" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 20:45:18 crc kubenswrapper[4858]: E0127 20:45:18.419816 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab60935-49aa-431d-80f1-59101d72d598" containerName="collect-profiles" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.419825 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab60935-49aa-431d-80f1-59101d72d598" containerName="collect-profiles" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.420060 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ab60935-49aa-431d-80f1-59101d72d598" containerName="collect-profiles" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.420084 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a05def4c-d0a6-4e87-8b26-8d72512941a2" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.420933 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.423412 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.423850 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.425245 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.425498 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4x4qb" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.425647 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.453738 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9"] Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.456828 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xlcx9\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.456950 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xlcx9\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.457192 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xlcx9\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.457383 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkljd\" (UniqueName: \"kubernetes.io/projected/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-kube-api-access-zkljd\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xlcx9\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.457536 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xlcx9\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.560139 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xlcx9\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.560223 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkljd\" (UniqueName: \"kubernetes.io/projected/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-kube-api-access-zkljd\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xlcx9\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.560275 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xlcx9\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.560311 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xlcx9\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.560379 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xlcx9\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.561331 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xlcx9\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.565280 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xlcx9\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.568480 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xlcx9\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.572427 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xlcx9\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.578455 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkljd\" (UniqueName: \"kubernetes.io/projected/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-kube-api-access-zkljd\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xlcx9\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:45:18 crc kubenswrapper[4858]: I0127 20:45:18.741926 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:45:19 crc kubenswrapper[4858]: I0127 20:45:19.300116 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9"] Jan 27 20:45:19 crc kubenswrapper[4858]: W0127 20:45:19.301848 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e29e00b_0b7a_4415_a1b1_abd8aec81f9e.slice/crio-0f20ecb64888312610a5a6360a7e9beb3a6911553389891b56dd68c6055784a5 WatchSource:0}: Error finding container 0f20ecb64888312610a5a6360a7e9beb3a6911553389891b56dd68c6055784a5: Status 404 returned error can't find the container with id 0f20ecb64888312610a5a6360a7e9beb3a6911553389891b56dd68c6055784a5 Jan 27 20:45:19 crc kubenswrapper[4858]: I0127 20:45:19.325056 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" event={"ID":"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e","Type":"ContainerStarted","Data":"0f20ecb64888312610a5a6360a7e9beb3a6911553389891b56dd68c6055784a5"} Jan 27 20:45:20 crc kubenswrapper[4858]: I0127 20:45:20.361816 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" event={"ID":"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e","Type":"ContainerStarted","Data":"584c9b98f62f70348fd68bdecd2770e2f5df2e8b7dc2b4bbf6e0010ceea1844e"} Jan 27 20:45:20 crc kubenswrapper[4858]: I0127 20:45:20.385481 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" podStartSLOduration=1.951418028 podStartE2EDuration="2.385457569s" podCreationTimestamp="2026-01-27 20:45:18 +0000 UTC" firstStartedPulling="2026-01-27 20:45:19.305195026 +0000 UTC m=+2264.013010752" lastFinishedPulling="2026-01-27 20:45:19.739234567 +0000 UTC m=+2264.447050293" observedRunningTime="2026-01-27 20:45:20.376532241 +0000 UTC m=+2265.084347987" watchObservedRunningTime="2026-01-27 20:45:20.385457569 +0000 UTC m=+2265.093273275" Jan 27 20:45:26 crc kubenswrapper[4858]: I0127 20:45:26.081215 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:45:26 crc kubenswrapper[4858]: E0127 20:45:26.082070 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" 
podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:45:39 crc kubenswrapper[4858]: I0127 20:45:39.070869 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:45:39 crc kubenswrapper[4858]: E0127 20:45:39.071719 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:45:51 crc kubenswrapper[4858]: I0127 20:45:51.072034 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:45:51 crc kubenswrapper[4858]: E0127 20:45:51.073536 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:45:56 crc kubenswrapper[4858]: I0127 20:45:56.064133 4858 scope.go:117] "RemoveContainer" containerID="11fcc0d7678fc44e02df659beb90bb5a09d46d36b80f937358b0bbf14f1fd886" Jan 27 20:46:02 crc kubenswrapper[4858]: I0127 20:46:02.395003 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pplkd"] Jan 27 20:46:02 crc kubenswrapper[4858]: I0127 20:46:02.398462 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pplkd" Jan 27 20:46:02 crc kubenswrapper[4858]: I0127 20:46:02.413669 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pplkd"] Jan 27 20:46:02 crc kubenswrapper[4858]: I0127 20:46:02.534455 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-utilities\") pod \"certified-operators-pplkd\" (UID: \"5532eaa9-ad05-4c5d-91b1-b917ca187ef2\") " pod="openshift-marketplace/certified-operators-pplkd" Jan 27 20:46:02 crc kubenswrapper[4858]: I0127 20:46:02.534626 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-catalog-content\") pod \"certified-operators-pplkd\" (UID: \"5532eaa9-ad05-4c5d-91b1-b917ca187ef2\") " pod="openshift-marketplace/certified-operators-pplkd" Jan 27 20:46:02 crc kubenswrapper[4858]: I0127 20:46:02.534685 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5j57\" (UniqueName: \"kubernetes.io/projected/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-kube-api-access-s5j57\") pod \"certified-operators-pplkd\" (UID: \"5532eaa9-ad05-4c5d-91b1-b917ca187ef2\") " pod="openshift-marketplace/certified-operators-pplkd" Jan 27 20:46:02 crc kubenswrapper[4858]: I0127 20:46:02.637057 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-catalog-content\") pod \"certified-operators-pplkd\" (UID: \"5532eaa9-ad05-4c5d-91b1-b917ca187ef2\") " pod="openshift-marketplace/certified-operators-pplkd" Jan 27 20:46:02 crc kubenswrapper[4858]: I0127 20:46:02.637157 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5j57\" (UniqueName: \"kubernetes.io/projected/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-kube-api-access-s5j57\") pod \"certified-operators-pplkd\" (UID: \"5532eaa9-ad05-4c5d-91b1-b917ca187ef2\") " pod="openshift-marketplace/certified-operators-pplkd" Jan 27 20:46:02 crc kubenswrapper[4858]: I0127 20:46:02.637270 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-utilities\") pod \"certified-operators-pplkd\" (UID: \"5532eaa9-ad05-4c5d-91b1-b917ca187ef2\") " pod="openshift-marketplace/certified-operators-pplkd" Jan 27 20:46:02 crc kubenswrapper[4858]: I0127 20:46:02.637655 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-catalog-content\") pod \"certified-operators-pplkd\" (UID: \"5532eaa9-ad05-4c5d-91b1-b917ca187ef2\") " pod="openshift-marketplace/certified-operators-pplkd" Jan 27 20:46:02 crc kubenswrapper[4858]: I0127 20:46:02.637778 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-utilities\") pod \"certified-operators-pplkd\" (UID: \"5532eaa9-ad05-4c5d-91b1-b917ca187ef2\") " pod="openshift-marketplace/certified-operators-pplkd" Jan 27 20:46:02 crc kubenswrapper[4858]: I0127 20:46:02.667415 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s5j57\" (UniqueName: \"kubernetes.io/projected/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-kube-api-access-s5j57\") pod \"certified-operators-pplkd\" (UID: \"5532eaa9-ad05-4c5d-91b1-b917ca187ef2\") " pod="openshift-marketplace/certified-operators-pplkd" Jan 27 20:46:02 crc kubenswrapper[4858]: I0127 20:46:02.744167 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pplkd" Jan 27 20:46:03 crc kubenswrapper[4858]: I0127 20:46:03.330214 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pplkd"] Jan 27 20:46:03 crc kubenswrapper[4858]: I0127 20:46:03.826052 4858 generic.go:334] "Generic (PLEG): container finished" podID="5532eaa9-ad05-4c5d-91b1-b917ca187ef2" containerID="163d55ef7dae1d2ef59e20fd892d6d4cc88ca84dc2aedf637bd974da6598a08e" exitCode=0 Jan 27 20:46:03 crc kubenswrapper[4858]: I0127 20:46:03.826165 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pplkd" event={"ID":"5532eaa9-ad05-4c5d-91b1-b917ca187ef2","Type":"ContainerDied","Data":"163d55ef7dae1d2ef59e20fd892d6d4cc88ca84dc2aedf637bd974da6598a08e"} Jan 27 20:46:03 crc kubenswrapper[4858]: I0127 20:46:03.826424 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pplkd" event={"ID":"5532eaa9-ad05-4c5d-91b1-b917ca187ef2","Type":"ContainerStarted","Data":"9aa479a3823d05c0258620172b957056ff2fadaaa02fed618fc444c9d95ebd76"} Jan 27 20:46:04 crc kubenswrapper[4858]: I0127 20:46:04.840863 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pplkd" event={"ID":"5532eaa9-ad05-4c5d-91b1-b917ca187ef2","Type":"ContainerStarted","Data":"42f67c761fc6d9fc9718809b7b698d6f92f2bff325db31b21530401a9a580a9a"} Jan 27 20:46:05 crc kubenswrapper[4858]: I0127 20:46:05.851644 4858 generic.go:334] "Generic (PLEG): container finished" podID="5532eaa9-ad05-4c5d-91b1-b917ca187ef2" containerID="42f67c761fc6d9fc9718809b7b698d6f92f2bff325db31b21530401a9a580a9a" exitCode=0 Jan 27 20:46:05 crc kubenswrapper[4858]: I0127 20:46:05.851731 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pplkd" event={"ID":"5532eaa9-ad05-4c5d-91b1-b917ca187ef2","Type":"ContainerDied","Data":"42f67c761fc6d9fc9718809b7b698d6f92f2bff325db31b21530401a9a580a9a"} Jan 27 20:46:06 crc kubenswrapper[4858]: I0127 20:46:06.078236 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:46:06 crc kubenswrapper[4858]: E0127 20:46:06.078603 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:46:06 crc kubenswrapper[4858]: I0127 20:46:06.862582 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pplkd" event={"ID":"5532eaa9-ad05-4c5d-91b1-b917ca187ef2","Type":"ContainerStarted","Data":"4993ec3b5ad107cb56d78be31606624848df6404d65680e57e51fb8888c9a3a1"} Jan 27 20:46:06 crc kubenswrapper[4858]: I0127 20:46:06.886093 4858 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pplkd" podStartSLOduration=2.22328795 podStartE2EDuration="4.886070925s" podCreationTimestamp="2026-01-27 20:46:02 +0000 UTC" firstStartedPulling="2026-01-27 20:46:03.82832836 +0000 UTC m=+2308.536144066" lastFinishedPulling="2026-01-27 20:46:06.491111335 +0000 UTC m=+2311.198927041" observedRunningTime="2026-01-27 20:46:06.881418662 +0000 UTC m=+2311.589234388" watchObservedRunningTime="2026-01-27 20:46:06.886070925 +0000 UTC m=+2311.593886631" Jan 27 20:46:12 crc kubenswrapper[4858]: I0127 20:46:12.745632 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pplkd" Jan 27 20:46:12 crc kubenswrapper[4858]: I0127 20:46:12.746315 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pplkd" Jan 27 20:46:12 crc kubenswrapper[4858]: I0127 20:46:12.788655 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pplkd" Jan 27 20:46:12 crc kubenswrapper[4858]: I0127 20:46:12.965176 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pplkd" Jan 27 20:46:13 crc kubenswrapper[4858]: I0127 20:46:13.025367 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pplkd"] Jan 27 20:46:14 crc kubenswrapper[4858]: I0127 20:46:14.930030 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pplkd" podUID="5532eaa9-ad05-4c5d-91b1-b917ca187ef2" containerName="registry-server" containerID="cri-o://4993ec3b5ad107cb56d78be31606624848df6404d65680e57e51fb8888c9a3a1" gracePeriod=2 Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.452360 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pplkd" Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.626705 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5j57\" (UniqueName: \"kubernetes.io/projected/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-kube-api-access-s5j57\") pod \"5532eaa9-ad05-4c5d-91b1-b917ca187ef2\" (UID: \"5532eaa9-ad05-4c5d-91b1-b917ca187ef2\") " Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.626802 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-utilities\") pod \"5532eaa9-ad05-4c5d-91b1-b917ca187ef2\" (UID: \"5532eaa9-ad05-4c5d-91b1-b917ca187ef2\") " Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.626872 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-catalog-content\") pod \"5532eaa9-ad05-4c5d-91b1-b917ca187ef2\" (UID: \"5532eaa9-ad05-4c5d-91b1-b917ca187ef2\") " Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.628762 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-utilities" (OuterVolumeSpecName: "utilities") pod "5532eaa9-ad05-4c5d-91b1-b917ca187ef2" (UID: "5532eaa9-ad05-4c5d-91b1-b917ca187ef2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.634588 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-kube-api-access-s5j57" (OuterVolumeSpecName: "kube-api-access-s5j57") pod "5532eaa9-ad05-4c5d-91b1-b917ca187ef2" (UID: "5532eaa9-ad05-4c5d-91b1-b917ca187ef2"). InnerVolumeSpecName "kube-api-access-s5j57". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.693568 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5532eaa9-ad05-4c5d-91b1-b917ca187ef2" (UID: "5532eaa9-ad05-4c5d-91b1-b917ca187ef2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.729670 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5j57\" (UniqueName: \"kubernetes.io/projected/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-kube-api-access-s5j57\") on node \"crc\" DevicePath \"\"" Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.729704 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.729714 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5532eaa9-ad05-4c5d-91b1-b917ca187ef2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.942583 4858 generic.go:334] "Generic (PLEG): container finished" podID="5532eaa9-ad05-4c5d-91b1-b917ca187ef2" containerID="4993ec3b5ad107cb56d78be31606624848df6404d65680e57e51fb8888c9a3a1" exitCode=0 Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.942639 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pplkd" Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.942638 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pplkd" event={"ID":"5532eaa9-ad05-4c5d-91b1-b917ca187ef2","Type":"ContainerDied","Data":"4993ec3b5ad107cb56d78be31606624848df6404d65680e57e51fb8888c9a3a1"} Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.942804 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pplkd" event={"ID":"5532eaa9-ad05-4c5d-91b1-b917ca187ef2","Type":"ContainerDied","Data":"9aa479a3823d05c0258620172b957056ff2fadaaa02fed618fc444c9d95ebd76"} Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.942847 4858 scope.go:117] "RemoveContainer" containerID="4993ec3b5ad107cb56d78be31606624848df6404d65680e57e51fb8888c9a3a1" Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.975819 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pplkd"] Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.979518 4858 scope.go:117] "RemoveContainer" containerID="42f67c761fc6d9fc9718809b7b698d6f92f2bff325db31b21530401a9a580a9a" Jan 27 20:46:15 crc kubenswrapper[4858]: I0127 20:46:15.986369 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pplkd"] Jan 27 20:46:16 crc kubenswrapper[4858]: I0127 20:46:16.011369 4858 scope.go:117] "RemoveContainer" containerID="163d55ef7dae1d2ef59e20fd892d6d4cc88ca84dc2aedf637bd974da6598a08e" Jan 27 20:46:16 crc kubenswrapper[4858]: I0127 20:46:16.052560 4858 scope.go:117] "RemoveContainer" containerID="4993ec3b5ad107cb56d78be31606624848df6404d65680e57e51fb8888c9a3a1" Jan 27 20:46:16 crc kubenswrapper[4858]: E0127 20:46:16.053380 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4993ec3b5ad107cb56d78be31606624848df6404d65680e57e51fb8888c9a3a1\": container with ID starting with 4993ec3b5ad107cb56d78be31606624848df6404d65680e57e51fb8888c9a3a1 not found: ID does not exist" containerID="4993ec3b5ad107cb56d78be31606624848df6404d65680e57e51fb8888c9a3a1" Jan 27 20:46:16 crc kubenswrapper[4858]: I0127 20:46:16.053414 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4993ec3b5ad107cb56d78be31606624848df6404d65680e57e51fb8888c9a3a1"} err="failed to get container status \"4993ec3b5ad107cb56d78be31606624848df6404d65680e57e51fb8888c9a3a1\": rpc error: code = NotFound desc = could not find container \"4993ec3b5ad107cb56d78be31606624848df6404d65680e57e51fb8888c9a3a1\": container with ID starting with 4993ec3b5ad107cb56d78be31606624848df6404d65680e57e51fb8888c9a3a1 not found: ID does not exist" Jan 27 20:46:16 crc kubenswrapper[4858]: I0127 20:46:16.053438 4858 scope.go:117] "RemoveContainer" containerID="42f67c761fc6d9fc9718809b7b698d6f92f2bff325db31b21530401a9a580a9a" Jan 27 20:46:16 crc kubenswrapper[4858]: E0127 20:46:16.053832 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42f67c761fc6d9fc9718809b7b698d6f92f2bff325db31b21530401a9a580a9a\": container with ID starting with 42f67c761fc6d9fc9718809b7b698d6f92f2bff325db31b21530401a9a580a9a not found: ID does not exist" containerID="42f67c761fc6d9fc9718809b7b698d6f92f2bff325db31b21530401a9a580a9a" Jan 27 20:46:16 crc kubenswrapper[4858]: I0127 20:46:16.053894 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f67c761fc6d9fc9718809b7b698d6f92f2bff325db31b21530401a9a580a9a"} err="failed to get container status \"42f67c761fc6d9fc9718809b7b698d6f92f2bff325db31b21530401a9a580a9a\": rpc error: code = NotFound desc = could not find container \"42f67c761fc6d9fc9718809b7b698d6f92f2bff325db31b21530401a9a580a9a\": container with ID starting with 42f67c761fc6d9fc9718809b7b698d6f92f2bff325db31b21530401a9a580a9a not found: ID does not exist" Jan 27 20:46:16 crc kubenswrapper[4858]: I0127 20:46:16.053936 4858 scope.go:117] "RemoveContainer" containerID="163d55ef7dae1d2ef59e20fd892d6d4cc88ca84dc2aedf637bd974da6598a08e" Jan 27 20:46:16 crc kubenswrapper[4858]: E0127 20:46:16.054287 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"163d55ef7dae1d2ef59e20fd892d6d4cc88ca84dc2aedf637bd974da6598a08e\": container with ID starting with 163d55ef7dae1d2ef59e20fd892d6d4cc88ca84dc2aedf637bd974da6598a08e not found: ID does not exist" containerID="163d55ef7dae1d2ef59e20fd892d6d4cc88ca84dc2aedf637bd974da6598a08e" Jan 27 20:46:16 crc kubenswrapper[4858]: I0127 20:46:16.054313 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"163d55ef7dae1d2ef59e20fd892d6d4cc88ca84dc2aedf637bd974da6598a08e"} err="failed to get container status \"163d55ef7dae1d2ef59e20fd892d6d4cc88ca84dc2aedf637bd974da6598a08e\": rpc error: code = NotFound desc = could not find container \"163d55ef7dae1d2ef59e20fd892d6d4cc88ca84dc2aedf637bd974da6598a08e\": container with ID starting with 163d55ef7dae1d2ef59e20fd892d6d4cc88ca84dc2aedf637bd974da6598a08e not found: ID does not exist" Jan 27 20:46:16 crc kubenswrapper[4858]: I0127 20:46:16.083152 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5532eaa9-ad05-4c5d-91b1-b917ca187ef2" path="/var/lib/kubelet/pods/5532eaa9-ad05-4c5d-91b1-b917ca187ef2/volumes" Jan 27 20:46:19 crc kubenswrapper[4858]: I0127 20:46:19.070722 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:46:19 crc kubenswrapper[4858]: E0127 20:46:19.071862 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:46:31 crc kubenswrapper[4858]: I0127 20:46:31.071511 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:46:31 crc kubenswrapper[4858]: E0127 20:46:31.072291 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:46:31 crc kubenswrapper[4858]: I0127 20:46:31.089449 4858 generic.go:334] "Generic (PLEG): container finished" podID="1e29e00b-0b7a-4415-a1b1-abd8aec81f9e" 
containerID="584c9b98f62f70348fd68bdecd2770e2f5df2e8b7dc2b4bbf6e0010ceea1844e" exitCode=0 Jan 27 20:46:31 crc kubenswrapper[4858]: I0127 20:46:31.089522 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" event={"ID":"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e","Type":"ContainerDied","Data":"584c9b98f62f70348fd68bdecd2770e2f5df2e8b7dc2b4bbf6e0010ceea1844e"} Jan 27 20:46:32 crc kubenswrapper[4858]: I0127 20:46:32.559406 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:46:32 crc kubenswrapper[4858]: I0127 20:46:32.619446 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ovn-combined-ca-bundle\") pod \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " Jan 27 20:46:32 crc kubenswrapper[4858]: I0127 20:46:32.619577 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkljd\" (UniqueName: \"kubernetes.io/projected/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-kube-api-access-zkljd\") pod \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " Jan 27 20:46:32 crc kubenswrapper[4858]: I0127 20:46:32.619687 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ssh-key-openstack-edpm-ipam\") pod \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " Jan 27 20:46:32 crc kubenswrapper[4858]: I0127 20:46:32.620940 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ovncontroller-config-0\") pod \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " Jan 27 20:46:32 crc kubenswrapper[4858]: I0127 20:46:32.621065 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-inventory\") pod \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\" (UID: \"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e\") " Jan 27 20:46:32 crc kubenswrapper[4858]: I0127 20:46:32.626214 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1e29e00b-0b7a-4415-a1b1-abd8aec81f9e" (UID: "1e29e00b-0b7a-4415-a1b1-abd8aec81f9e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:46:32 crc kubenswrapper[4858]: I0127 20:46:32.645807 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-kube-api-access-zkljd" (OuterVolumeSpecName: "kube-api-access-zkljd") pod "1e29e00b-0b7a-4415-a1b1-abd8aec81f9e" (UID: "1e29e00b-0b7a-4415-a1b1-abd8aec81f9e"). InnerVolumeSpecName "kube-api-access-zkljd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:46:32 crc kubenswrapper[4858]: I0127 20:46:32.664138 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-inventory" (OuterVolumeSpecName: "inventory") pod "1e29e00b-0b7a-4415-a1b1-abd8aec81f9e" (UID: "1e29e00b-0b7a-4415-a1b1-abd8aec81f9e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:46:32 crc kubenswrapper[4858]: I0127 20:46:32.666448 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "1e29e00b-0b7a-4415-a1b1-abd8aec81f9e" (UID: "1e29e00b-0b7a-4415-a1b1-abd8aec81f9e"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:46:32 crc kubenswrapper[4858]: I0127 20:46:32.667445 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1e29e00b-0b7a-4415-a1b1-abd8aec81f9e" (UID: "1e29e00b-0b7a-4415-a1b1-abd8aec81f9e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:46:32 crc kubenswrapper[4858]: I0127 20:46:32.724443 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:46:32 crc kubenswrapper[4858]: I0127 20:46:32.724480 4858 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:46:32 crc kubenswrapper[4858]: I0127 20:46:32.724492 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 20:46:32 crc kubenswrapper[4858]: I0127 20:46:32.724502 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:46:32 crc kubenswrapper[4858]: I0127 20:46:32.724511 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkljd\" (UniqueName: \"kubernetes.io/projected/1e29e00b-0b7a-4415-a1b1-abd8aec81f9e-kube-api-access-zkljd\") on node \"crc\" DevicePath \"\"" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.116193 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" event={"ID":"1e29e00b-0b7a-4415-a1b1-abd8aec81f9e","Type":"ContainerDied","Data":"0f20ecb64888312610a5a6360a7e9beb3a6911553389891b56dd68c6055784a5"} Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.116238 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xlcx9" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.116254 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f20ecb64888312610a5a6360a7e9beb3a6911553389891b56dd68c6055784a5" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.296708 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx"] Jan 27 20:46:33 crc kubenswrapper[4858]: E0127 20:46:33.297114 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5532eaa9-ad05-4c5d-91b1-b917ca187ef2" containerName="extract-content" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.297160 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5532eaa9-ad05-4c5d-91b1-b917ca187ef2" containerName="extract-content" Jan 27 20:46:33 crc kubenswrapper[4858]: E0127 20:46:33.297175 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e29e00b-0b7a-4415-a1b1-abd8aec81f9e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.297183 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e29e00b-0b7a-4415-a1b1-abd8aec81f9e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 20:46:33 crc kubenswrapper[4858]: E0127 20:46:33.297208 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5532eaa9-ad05-4c5d-91b1-b917ca187ef2" containerName="registry-server" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.297216 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5532eaa9-ad05-4c5d-91b1-b917ca187ef2" containerName="registry-server" Jan 27 20:46:33 crc kubenswrapper[4858]: E0127 20:46:33.297234 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5532eaa9-ad05-4c5d-91b1-b917ca187ef2" containerName="extract-utilities" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.297243 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5532eaa9-ad05-4c5d-91b1-b917ca187ef2" containerName="extract-utilities" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.297452 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5532eaa9-ad05-4c5d-91b1-b917ca187ef2" containerName="registry-server" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.297469 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e29e00b-0b7a-4415-a1b1-abd8aec81f9e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.298123 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.300223 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.300816 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.300968 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.303926 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4x4qb" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.304155 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.306902 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.315665 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx"] Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.340144 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.340202 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.340235 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.340317 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.340351 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.340405 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9cld\" (UniqueName: \"kubernetes.io/projected/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-kube-api-access-m9cld\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.441938 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.442276 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.442433 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9cld\" (UniqueName: \"kubernetes.io/projected/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-kube-api-access-m9cld\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.443072 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.443188 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.443289 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.448825 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.449255 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.450975 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.453440 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.455514 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.467635 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9cld\" (UniqueName: \"kubernetes.io/projected/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-kube-api-access-m9cld\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:33 crc kubenswrapper[4858]: I0127 20:46:33.614720 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:46:34 crc kubenswrapper[4858]: I0127 20:46:34.175512 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx"] Jan 27 20:46:35 crc kubenswrapper[4858]: I0127 20:46:35.139746 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" event={"ID":"d30a30e0-0d38-4abd-8fc2-71b1ddce069a","Type":"ContainerStarted","Data":"6cf97683ff3f15cc00b5eebc99cf8286439bb9f390423cbad9b3eb7b65999be2"} Jan 27 20:46:35 crc kubenswrapper[4858]: I0127 20:46:35.140612 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" event={"ID":"d30a30e0-0d38-4abd-8fc2-71b1ddce069a","Type":"ContainerStarted","Data":"4b8c4a78b727389b7d824a88a4fcd05a6e0b8f327e9232787d7641bd55661d08"} Jan 27 20:46:35 crc kubenswrapper[4858]: I0127 20:46:35.173935 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" podStartSLOduration=1.735313399 podStartE2EDuration="2.173901797s" podCreationTimestamp="2026-01-27 20:46:33 +0000 UTC" firstStartedPulling="2026-01-27 20:46:34.17718258 +0000 UTC m=+2338.884998286" lastFinishedPulling="2026-01-27 20:46:34.615770968 +0000 UTC m=+2339.323586684" observedRunningTime="2026-01-27 20:46:35.160140644 +0000 UTC m=+2339.867956350" watchObservedRunningTime="2026-01-27 20:46:35.173901797 +0000 UTC m=+2339.881717533" Jan 27 20:46:45 crc kubenswrapper[4858]: I0127 20:46:45.071605 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:46:45 crc kubenswrapper[4858]: E0127 20:46:45.072816 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:46:58 crc kubenswrapper[4858]: I0127 20:46:58.075777 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:46:58 crc kubenswrapper[4858]: E0127 20:46:58.076384 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:47:11 crc kubenswrapper[4858]: I0127 20:47:11.072227 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:47:11 crc kubenswrapper[4858]: E0127 20:47:11.073388 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:47:22 crc kubenswrapper[4858]: I0127 20:47:22.071289 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:47:22 crc kubenswrapper[4858]: E0127 20:47:22.072454 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:47:31 crc kubenswrapper[4858]: I0127 20:47:31.741960 4858 generic.go:334] "Generic (PLEG): container finished" podID="d30a30e0-0d38-4abd-8fc2-71b1ddce069a" containerID="6cf97683ff3f15cc00b5eebc99cf8286439bb9f390423cbad9b3eb7b65999be2" exitCode=0 Jan 27 20:47:31 crc kubenswrapper[4858]: I0127 20:47:31.742097 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" event={"ID":"d30a30e0-0d38-4abd-8fc2-71b1ddce069a","Type":"ContainerDied","Data":"6cf97683ff3f15cc00b5eebc99cf8286439bb9f390423cbad9b3eb7b65999be2"} Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.073037 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:47:33 crc kubenswrapper[4858]: E0127 20:47:33.074432 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.287926 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.391193 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-inventory\") pod \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.391351 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-ssh-key-openstack-edpm-ipam\") pod \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.391417 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-nova-metadata-neutron-config-0\") pod \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.391462 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.391511 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-neutron-metadata-combined-ca-bundle\") pod \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.391875 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9cld\" (UniqueName: \"kubernetes.io/projected/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-kube-api-access-m9cld\") pod \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\" (UID: \"d30a30e0-0d38-4abd-8fc2-71b1ddce069a\") " Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.397973 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-kube-api-access-m9cld" (OuterVolumeSpecName: "kube-api-access-m9cld") pod "d30a30e0-0d38-4abd-8fc2-71b1ddce069a" (UID: "d30a30e0-0d38-4abd-8fc2-71b1ddce069a"). InnerVolumeSpecName "kube-api-access-m9cld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.400611 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d30a30e0-0d38-4abd-8fc2-71b1ddce069a" (UID: "d30a30e0-0d38-4abd-8fc2-71b1ddce069a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.428808 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "d30a30e0-0d38-4abd-8fc2-71b1ddce069a" (UID: "d30a30e0-0d38-4abd-8fc2-71b1ddce069a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.433476 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d30a30e0-0d38-4abd-8fc2-71b1ddce069a" (UID: "d30a30e0-0d38-4abd-8fc2-71b1ddce069a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.435800 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-inventory" (OuterVolumeSpecName: "inventory") pod "d30a30e0-0d38-4abd-8fc2-71b1ddce069a" (UID: "d30a30e0-0d38-4abd-8fc2-71b1ddce069a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.440822 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "d30a30e0-0d38-4abd-8fc2-71b1ddce069a" (UID: "d30a30e0-0d38-4abd-8fc2-71b1ddce069a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.494884 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9cld\" (UniqueName: \"kubernetes.io/projected/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-kube-api-access-m9cld\") on node \"crc\" DevicePath \"\"" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.494938 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.494950 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.494962 4858 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.494975 4858 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.494985 4858 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30a30e0-0d38-4abd-8fc2-71b1ddce069a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.767055 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" event={"ID":"d30a30e0-0d38-4abd-8fc2-71b1ddce069a","Type":"ContainerDied","Data":"4b8c4a78b727389b7d824a88a4fcd05a6e0b8f327e9232787d7641bd55661d08"} Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.767116 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b8c4a78b727389b7d824a88a4fcd05a6e0b8f327e9232787d7641bd55661d08" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.767138 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.952814 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d"] Jan 27 20:47:33 crc kubenswrapper[4858]: E0127 20:47:33.953793 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30a30e0-0d38-4abd-8fc2-71b1ddce069a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.953829 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30a30e0-0d38-4abd-8fc2-71b1ddce069a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.954253 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d30a30e0-0d38-4abd-8fc2-71b1ddce069a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.955516 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.958508 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.958821 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.959155 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.959751 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.959797 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4x4qb" Jan 27 20:47:33 crc kubenswrapper[4858]: I0127 20:47:33.970478 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d"] Jan 27 20:47:34 crc kubenswrapper[4858]: I0127 20:47:34.109368 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:47:34 crc kubenswrapper[4858]: I0127 20:47:34.109519 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:47:34 crc kubenswrapper[4858]: I0127 20:47:34.109705 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") 
" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:47:34 crc kubenswrapper[4858]: I0127 20:47:34.109761 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cllbv\" (UniqueName: \"kubernetes.io/projected/3c49607c-dca5-4943-acbc-5c13058a99df-kube-api-access-cllbv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:47:34 crc kubenswrapper[4858]: I0127 20:47:34.109797 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:47:34 crc kubenswrapper[4858]: I0127 20:47:34.211706 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:47:34 crc kubenswrapper[4858]: I0127 20:47:34.211825 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cllbv\" (UniqueName: \"kubernetes.io/projected/3c49607c-dca5-4943-acbc-5c13058a99df-kube-api-access-cllbv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:47:34 crc kubenswrapper[4858]: I0127 20:47:34.211872 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:47:34 crc kubenswrapper[4858]: I0127 20:47:34.212119 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:47:34 crc kubenswrapper[4858]: I0127 20:47:34.212231 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:47:34 crc kubenswrapper[4858]: I0127 20:47:34.233288 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:47:34 crc kubenswrapper[4858]: I0127 20:47:34.233876 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:47:34 crc kubenswrapper[4858]: I0127 20:47:34.235145 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:47:34 crc kubenswrapper[4858]: I0127 20:47:34.235949 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:47:34 crc kubenswrapper[4858]: I0127 20:47:34.255182 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cllbv\" (UniqueName: \"kubernetes.io/projected/3c49607c-dca5-4943-acbc-5c13058a99df-kube-api-access-cllbv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:47:34 crc kubenswrapper[4858]: I0127 20:47:34.276654 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:47:34 crc kubenswrapper[4858]: I0127 20:47:34.905069 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d"] Jan 27 20:47:35 crc kubenswrapper[4858]: I0127 20:47:35.797433 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" event={"ID":"3c49607c-dca5-4943-acbc-5c13058a99df","Type":"ContainerStarted","Data":"c76cdfa633778bf524d40ed88e6d977e83e2c451ea5c647234c4a0dc559737cb"} Jan 27 20:47:35 crc kubenswrapper[4858]: I0127 20:47:35.797906 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" event={"ID":"3c49607c-dca5-4943-acbc-5c13058a99df","Type":"ContainerStarted","Data":"1fcc7de2041efa19d881ab05f3358564c8d8bdbf982e9277deed852fb7dcbae2"} Jan 27 20:47:46 crc kubenswrapper[4858]: I0127 20:47:46.087402 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:47:46 crc kubenswrapper[4858]: E0127 20:47:46.088526 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:48:01 crc kubenswrapper[4858]: I0127 20:48:01.071702 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:48:01 crc kubenswrapper[4858]: E0127 20:48:01.072574 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:48:15 crc kubenswrapper[4858]: I0127 20:48:15.071277 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:48:15 crc kubenswrapper[4858]: E0127 20:48:15.072356 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:48:28 crc kubenswrapper[4858]: I0127 20:48:28.073000 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:48:28 crc kubenswrapper[4858]: E0127 20:48:28.073776 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:48:42 crc kubenswrapper[4858]: I0127 20:48:42.071652 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:48:42 crc kubenswrapper[4858]: E0127 20:48:42.072387 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:48:56 crc kubenswrapper[4858]: I0127 20:48:56.079463 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:48:56 crc kubenswrapper[4858]: E0127 20:48:56.080255 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:49:07 crc kubenswrapper[4858]: I0127 20:49:07.071670 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:49:07 crc kubenswrapper[4858]: E0127 20:49:07.072892 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:49:18 crc kubenswrapper[4858]: I0127 20:49:18.071648 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:49:18 crc kubenswrapper[4858]: E0127 20:49:18.072833 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:49:31 crc kubenswrapper[4858]: I0127 20:49:31.071205 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:49:31 crc kubenswrapper[4858]: E0127 20:49:31.071991 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:49:42 crc kubenswrapper[4858]: I0127 20:49:42.071930 4858 
scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:49:42 crc kubenswrapper[4858]: E0127 20:49:42.073334 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:49:56 crc kubenswrapper[4858]: I0127 20:49:56.072919 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:49:56 crc kubenswrapper[4858]: E0127 20:49:56.073792 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:50:10 crc kubenswrapper[4858]: I0127 20:50:10.071027 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710" Jan 27 20:50:10 crc kubenswrapper[4858]: I0127 20:50:10.966889 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"10d7cb8dfa175ff909d1ae286199f4ffd1e3c3decb1330855c0f465448bfbdbf"} Jan 27 20:50:10 crc kubenswrapper[4858]: I0127 20:50:10.995811 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" podStartSLOduration=157.576814191 podStartE2EDuration="2m37.995785264s" podCreationTimestamp="2026-01-27 20:47:33 +0000 UTC" firstStartedPulling="2026-01-27 20:47:34.911175743 +0000 UTC m=+2399.618991459" lastFinishedPulling="2026-01-27 20:47:35.330146786 +0000 UTC m=+2400.037962532" observedRunningTime="2026-01-27 20:47:35.823408032 +0000 UTC m=+2400.531223748" watchObservedRunningTime="2026-01-27 20:50:10.995785264 +0000 UTC m=+2555.703601010" Jan 27 20:51:08 crc kubenswrapper[4858]: I0127 20:51:08.166023 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gltfn"] Jan 27 20:51:08 crc kubenswrapper[4858]: I0127 20:51:08.175023 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gltfn" Jan 27 20:51:08 crc kubenswrapper[4858]: I0127 20:51:08.186159 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gltfn"] Jan 27 20:51:08 crc kubenswrapper[4858]: I0127 20:51:08.290246 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1360349-819a-4012-bc63-0d669a438f5b-catalog-content\") pod \"community-operators-gltfn\" (UID: \"b1360349-819a-4012-bc63-0d669a438f5b\") " pod="openshift-marketplace/community-operators-gltfn" Jan 27 20:51:08 crc kubenswrapper[4858]: I0127 20:51:08.290370 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c5hd\" (UniqueName: \"kubernetes.io/projected/b1360349-819a-4012-bc63-0d669a438f5b-kube-api-access-6c5hd\") pod \"community-operators-gltfn\" (UID: \"b1360349-819a-4012-bc63-0d669a438f5b\") " pod="openshift-marketplace/community-operators-gltfn" Jan 27 20:51:08 crc kubenswrapper[4858]: I0127 20:51:08.290460 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1360349-819a-4012-bc63-0d669a438f5b-utilities\") pod \"community-operators-gltfn\" (UID: \"b1360349-819a-4012-bc63-0d669a438f5b\") " pod="openshift-marketplace/community-operators-gltfn" Jan 27 20:51:08 crc kubenswrapper[4858]: I0127 20:51:08.392302 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c5hd\" (UniqueName: \"kubernetes.io/projected/b1360349-819a-4012-bc63-0d669a438f5b-kube-api-access-6c5hd\") pod \"community-operators-gltfn\" (UID: \"b1360349-819a-4012-bc63-0d669a438f5b\") " pod="openshift-marketplace/community-operators-gltfn" Jan 27 20:51:08 crc kubenswrapper[4858]: I0127 20:51:08.392411 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1360349-819a-4012-bc63-0d669a438f5b-utilities\") pod \"community-operators-gltfn\" (UID: \"b1360349-819a-4012-bc63-0d669a438f5b\") " pod="openshift-marketplace/community-operators-gltfn" Jan 27 20:51:08 crc kubenswrapper[4858]: I0127 20:51:08.392488 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1360349-819a-4012-bc63-0d669a438f5b-catalog-content\") pod \"community-operators-gltfn\" (UID: \"b1360349-819a-4012-bc63-0d669a438f5b\") " pod="openshift-marketplace/community-operators-gltfn" Jan 27 20:51:08 crc kubenswrapper[4858]: I0127 20:51:08.392979 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1360349-819a-4012-bc63-0d669a438f5b-utilities\") pod \"community-operators-gltfn\" (UID: \"b1360349-819a-4012-bc63-0d669a438f5b\") " pod="openshift-marketplace/community-operators-gltfn" Jan 27 20:51:08 crc kubenswrapper[4858]: I0127 20:51:08.393136 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1360349-819a-4012-bc63-0d669a438f5b-catalog-content\") pod \"community-operators-gltfn\" (UID: \"b1360349-819a-4012-bc63-0d669a438f5b\") " pod="openshift-marketplace/community-operators-gltfn" Jan 27 20:51:08 crc kubenswrapper[4858]: I0127 20:51:08.436102 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6c5hd\" (UniqueName: \"kubernetes.io/projected/b1360349-819a-4012-bc63-0d669a438f5b-kube-api-access-6c5hd\") pod \"community-operators-gltfn\" (UID: \"b1360349-819a-4012-bc63-0d669a438f5b\") " pod="openshift-marketplace/community-operators-gltfn" Jan 27 20:51:08 crc kubenswrapper[4858]: I0127 20:51:08.503498 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gltfn" Jan 27 20:51:09 crc kubenswrapper[4858]: I0127 20:51:09.045739 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gltfn"] Jan 27 20:51:09 crc kubenswrapper[4858]: I0127 20:51:09.638405 4858 generic.go:334] "Generic (PLEG): container finished" podID="b1360349-819a-4012-bc63-0d669a438f5b" containerID="91ec9c196f41ae1d76c1792b71aca39c19c7407f2e255835fac0b7f36c0146e3" exitCode=0 Jan 27 20:51:09 crc kubenswrapper[4858]: I0127 20:51:09.638480 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gltfn" event={"ID":"b1360349-819a-4012-bc63-0d669a438f5b","Type":"ContainerDied","Data":"91ec9c196f41ae1d76c1792b71aca39c19c7407f2e255835fac0b7f36c0146e3"} Jan 27 20:51:09 crc kubenswrapper[4858]: I0127 20:51:09.638770 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gltfn" event={"ID":"b1360349-819a-4012-bc63-0d669a438f5b","Type":"ContainerStarted","Data":"0f3489a348584969c5dfa03e1cabb9f16be4e513ca084d2a767612562f120cd9"} Jan 27 20:51:09 crc kubenswrapper[4858]: I0127 20:51:09.641402 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 20:51:10 crc kubenswrapper[4858]: I0127 20:51:10.653474 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gltfn" event={"ID":"b1360349-819a-4012-bc63-0d669a438f5b","Type":"ContainerStarted","Data":"2a71f48c42c9d8ba864f765b1204c59433dd110388cdcab3abbe815d00fbb32f"} Jan 27 20:51:11 crc kubenswrapper[4858]: I0127 20:51:11.665209 4858 generic.go:334] "Generic (PLEG): container finished" podID="b1360349-819a-4012-bc63-0d669a438f5b" containerID="2a71f48c42c9d8ba864f765b1204c59433dd110388cdcab3abbe815d00fbb32f" exitCode=0 Jan 27 20:51:11 crc kubenswrapper[4858]: I0127 20:51:11.665260 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gltfn" event={"ID":"b1360349-819a-4012-bc63-0d669a438f5b","Type":"ContainerDied","Data":"2a71f48c42c9d8ba864f765b1204c59433dd110388cdcab3abbe815d00fbb32f"} Jan 27 20:51:12 crc kubenswrapper[4858]: I0127 20:51:12.677980 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gltfn" event={"ID":"b1360349-819a-4012-bc63-0d669a438f5b","Type":"ContainerStarted","Data":"b2016c71d2fc7435d08c3c4951dd69da56e61a9e2d9b322f66ddd2b3b31ed325"} Jan 27 20:51:12 crc kubenswrapper[4858]: I0127 20:51:12.703580 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gltfn" podStartSLOduration=2.25620535 podStartE2EDuration="4.703560536s" podCreationTimestamp="2026-01-27 20:51:08 +0000 UTC" firstStartedPulling="2026-01-27 20:51:09.641186771 +0000 UTC m=+2614.349002477" lastFinishedPulling="2026-01-27 20:51:12.088541947 +0000 UTC m=+2616.796357663" observedRunningTime="2026-01-27 20:51:12.69799835 +0000 UTC m=+2617.405814086" watchObservedRunningTime="2026-01-27 
20:51:12.703560536 +0000 UTC m=+2617.411376242" Jan 27 20:51:18 crc kubenswrapper[4858]: I0127 20:51:18.504075 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gltfn" Jan 27 20:51:18 crc kubenswrapper[4858]: I0127 20:51:18.506377 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gltfn" Jan 27 20:51:18 crc kubenswrapper[4858]: I0127 20:51:18.584470 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gltfn" Jan 27 20:51:18 crc kubenswrapper[4858]: I0127 20:51:18.786956 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gltfn" Jan 27 20:51:18 crc kubenswrapper[4858]: I0127 20:51:18.837914 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gltfn"] Jan 27 20:51:20 crc kubenswrapper[4858]: I0127 20:51:20.749433 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gltfn" podUID="b1360349-819a-4012-bc63-0d669a438f5b" containerName="registry-server" containerID="cri-o://b2016c71d2fc7435d08c3c4951dd69da56e61a9e2d9b322f66ddd2b3b31ed325" gracePeriod=2 Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.260820 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gltfn" Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.401659 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1360349-819a-4012-bc63-0d669a438f5b-utilities\") pod \"b1360349-819a-4012-bc63-0d669a438f5b\" (UID: \"b1360349-819a-4012-bc63-0d669a438f5b\") " Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.401829 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c5hd\" (UniqueName: \"kubernetes.io/projected/b1360349-819a-4012-bc63-0d669a438f5b-kube-api-access-6c5hd\") pod \"b1360349-819a-4012-bc63-0d669a438f5b\" (UID: \"b1360349-819a-4012-bc63-0d669a438f5b\") " Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.401893 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1360349-819a-4012-bc63-0d669a438f5b-catalog-content\") pod \"b1360349-819a-4012-bc63-0d669a438f5b\" (UID: \"b1360349-819a-4012-bc63-0d669a438f5b\") " Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.404642 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1360349-819a-4012-bc63-0d669a438f5b-utilities" (OuterVolumeSpecName: "utilities") pod "b1360349-819a-4012-bc63-0d669a438f5b" (UID: "b1360349-819a-4012-bc63-0d669a438f5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.408851 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1360349-819a-4012-bc63-0d669a438f5b-kube-api-access-6c5hd" (OuterVolumeSpecName: "kube-api-access-6c5hd") pod "b1360349-819a-4012-bc63-0d669a438f5b" (UID: "b1360349-819a-4012-bc63-0d669a438f5b"). InnerVolumeSpecName "kube-api-access-6c5hd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.477725 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1360349-819a-4012-bc63-0d669a438f5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1360349-819a-4012-bc63-0d669a438f5b" (UID: "b1360349-819a-4012-bc63-0d669a438f5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.504098 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1360349-819a-4012-bc63-0d669a438f5b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.504147 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c5hd\" (UniqueName: \"kubernetes.io/projected/b1360349-819a-4012-bc63-0d669a438f5b-kube-api-access-6c5hd\") on node \"crc\" DevicePath \"\"" Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.504165 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1360349-819a-4012-bc63-0d669a438f5b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.762814 4858 generic.go:334] "Generic (PLEG): container finished" podID="b1360349-819a-4012-bc63-0d669a438f5b" containerID="b2016c71d2fc7435d08c3c4951dd69da56e61a9e2d9b322f66ddd2b3b31ed325" exitCode=0 Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.762897 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gltfn" Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.762914 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gltfn" event={"ID":"b1360349-819a-4012-bc63-0d669a438f5b","Type":"ContainerDied","Data":"b2016c71d2fc7435d08c3c4951dd69da56e61a9e2d9b322f66ddd2b3b31ed325"} Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.763322 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gltfn" event={"ID":"b1360349-819a-4012-bc63-0d669a438f5b","Type":"ContainerDied","Data":"0f3489a348584969c5dfa03e1cabb9f16be4e513ca084d2a767612562f120cd9"} Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.763350 4858 scope.go:117] "RemoveContainer" containerID="b2016c71d2fc7435d08c3c4951dd69da56e61a9e2d9b322f66ddd2b3b31ed325" Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.807355 4858 scope.go:117] "RemoveContainer" containerID="2a71f48c42c9d8ba864f765b1204c59433dd110388cdcab3abbe815d00fbb32f" Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.809142 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gltfn"] Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.819718 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gltfn"] Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.845748 4858 scope.go:117] "RemoveContainer" containerID="91ec9c196f41ae1d76c1792b71aca39c19c7407f2e255835fac0b7f36c0146e3" Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.895084 4858 scope.go:117] "RemoveContainer" containerID="b2016c71d2fc7435d08c3c4951dd69da56e61a9e2d9b322f66ddd2b3b31ed325" Jan 27 20:51:21 crc kubenswrapper[4858]: E0127 20:51:21.895751 4858 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2016c71d2fc7435d08c3c4951dd69da56e61a9e2d9b322f66ddd2b3b31ed325\": container with ID starting with b2016c71d2fc7435d08c3c4951dd69da56e61a9e2d9b322f66ddd2b3b31ed325 not found: ID does not exist" containerID="b2016c71d2fc7435d08c3c4951dd69da56e61a9e2d9b322f66ddd2b3b31ed325" Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.895815 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2016c71d2fc7435d08c3c4951dd69da56e61a9e2d9b322f66ddd2b3b31ed325"} err="failed to get container status \"b2016c71d2fc7435d08c3c4951dd69da56e61a9e2d9b322f66ddd2b3b31ed325\": rpc error: code = NotFound desc = could not find container \"b2016c71d2fc7435d08c3c4951dd69da56e61a9e2d9b322f66ddd2b3b31ed325\": container with ID starting with b2016c71d2fc7435d08c3c4951dd69da56e61a9e2d9b322f66ddd2b3b31ed325 not found: ID does not exist" Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.895957 4858 scope.go:117] "RemoveContainer" containerID="2a71f48c42c9d8ba864f765b1204c59433dd110388cdcab3abbe815d00fbb32f" Jan 27 20:51:21 crc kubenswrapper[4858]: E0127 20:51:21.896579 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a71f48c42c9d8ba864f765b1204c59433dd110388cdcab3abbe815d00fbb32f\": container with ID starting with 2a71f48c42c9d8ba864f765b1204c59433dd110388cdcab3abbe815d00fbb32f not found: ID does not exist" containerID="2a71f48c42c9d8ba864f765b1204c59433dd110388cdcab3abbe815d00fbb32f" Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.896611 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a71f48c42c9d8ba864f765b1204c59433dd110388cdcab3abbe815d00fbb32f"} err="failed to get container status \"2a71f48c42c9d8ba864f765b1204c59433dd110388cdcab3abbe815d00fbb32f\": rpc error: code = NotFound desc = could not find container \"2a71f48c42c9d8ba864f765b1204c59433dd110388cdcab3abbe815d00fbb32f\": container with ID starting with 2a71f48c42c9d8ba864f765b1204c59433dd110388cdcab3abbe815d00fbb32f not found: ID does not exist" Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.896633 4858 scope.go:117] "RemoveContainer" containerID="91ec9c196f41ae1d76c1792b71aca39c19c7407f2e255835fac0b7f36c0146e3" Jan 27 20:51:21 crc kubenswrapper[4858]: E0127 20:51:21.896896 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91ec9c196f41ae1d76c1792b71aca39c19c7407f2e255835fac0b7f36c0146e3\": container with ID starting with 91ec9c196f41ae1d76c1792b71aca39c19c7407f2e255835fac0b7f36c0146e3 not found: ID does not exist" containerID="91ec9c196f41ae1d76c1792b71aca39c19c7407f2e255835fac0b7f36c0146e3" Jan 27 20:51:21 crc kubenswrapper[4858]: I0127 20:51:21.896923 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91ec9c196f41ae1d76c1792b71aca39c19c7407f2e255835fac0b7f36c0146e3"} err="failed to get container status \"91ec9c196f41ae1d76c1792b71aca39c19c7407f2e255835fac0b7f36c0146e3\": rpc error: code = NotFound desc = could not find container \"91ec9c196f41ae1d76c1792b71aca39c19c7407f2e255835fac0b7f36c0146e3\": container with ID starting with 91ec9c196f41ae1d76c1792b71aca39c19c7407f2e255835fac0b7f36c0146e3 not found: ID does not exist" Jan 27 20:51:22 crc kubenswrapper[4858]: I0127 20:51:22.087264 4858 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="b1360349-819a-4012-bc63-0d669a438f5b" path="/var/lib/kubelet/pods/b1360349-819a-4012-bc63-0d669a438f5b/volumes" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.472975 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q67f9"] Jan 27 20:52:15 crc kubenswrapper[4858]: E0127 20:52:15.474092 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1360349-819a-4012-bc63-0d669a438f5b" containerName="extract-content" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.474112 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1360349-819a-4012-bc63-0d669a438f5b" containerName="extract-content" Jan 27 20:52:15 crc kubenswrapper[4858]: E0127 20:52:15.474131 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1360349-819a-4012-bc63-0d669a438f5b" containerName="extract-utilities" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.474140 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1360349-819a-4012-bc63-0d669a438f5b" containerName="extract-utilities" Jan 27 20:52:15 crc kubenswrapper[4858]: E0127 20:52:15.474160 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1360349-819a-4012-bc63-0d669a438f5b" containerName="registry-server" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.474167 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1360349-819a-4012-bc63-0d669a438f5b" containerName="registry-server" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.474412 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1360349-819a-4012-bc63-0d669a438f5b" containerName="registry-server" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.476695 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q67f9" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.483775 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q67f9"] Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.607954 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ece34b1-068e-466a-8c68-185f58248609-utilities\") pod \"redhat-marketplace-q67f9\" (UID: \"8ece34b1-068e-466a-8c68-185f58248609\") " pod="openshift-marketplace/redhat-marketplace-q67f9" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.608369 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q86sm\" (UniqueName: \"kubernetes.io/projected/8ece34b1-068e-466a-8c68-185f58248609-kube-api-access-q86sm\") pod \"redhat-marketplace-q67f9\" (UID: \"8ece34b1-068e-466a-8c68-185f58248609\") " pod="openshift-marketplace/redhat-marketplace-q67f9" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.608544 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ece34b1-068e-466a-8c68-185f58248609-catalog-content\") pod \"redhat-marketplace-q67f9\" (UID: \"8ece34b1-068e-466a-8c68-185f58248609\") " pod="openshift-marketplace/redhat-marketplace-q67f9" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.665696 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z4t5f"] Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.668868 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z4t5f" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.690339 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z4t5f"] Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.710744 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ece34b1-068e-466a-8c68-185f58248609-utilities\") pod \"redhat-marketplace-q67f9\" (UID: \"8ece34b1-068e-466a-8c68-185f58248609\") " pod="openshift-marketplace/redhat-marketplace-q67f9" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.710906 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q86sm\" (UniqueName: \"kubernetes.io/projected/8ece34b1-068e-466a-8c68-185f58248609-kube-api-access-q86sm\") pod \"redhat-marketplace-q67f9\" (UID: \"8ece34b1-068e-466a-8c68-185f58248609\") " pod="openshift-marketplace/redhat-marketplace-q67f9" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.710961 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ece34b1-068e-466a-8c68-185f58248609-catalog-content\") pod \"redhat-marketplace-q67f9\" (UID: \"8ece34b1-068e-466a-8c68-185f58248609\") " pod="openshift-marketplace/redhat-marketplace-q67f9" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.711239 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ece34b1-068e-466a-8c68-185f58248609-utilities\") pod \"redhat-marketplace-q67f9\" (UID: \"8ece34b1-068e-466a-8c68-185f58248609\") " 
pod="openshift-marketplace/redhat-marketplace-q67f9" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.711610 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ece34b1-068e-466a-8c68-185f58248609-catalog-content\") pod \"redhat-marketplace-q67f9\" (UID: \"8ece34b1-068e-466a-8c68-185f58248609\") " pod="openshift-marketplace/redhat-marketplace-q67f9" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.732786 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q86sm\" (UniqueName: \"kubernetes.io/projected/8ece34b1-068e-466a-8c68-185f58248609-kube-api-access-q86sm\") pod \"redhat-marketplace-q67f9\" (UID: \"8ece34b1-068e-466a-8c68-185f58248609\") " pod="openshift-marketplace/redhat-marketplace-q67f9" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.813217 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef25051-c70d-4ece-afa1-c9be4224e48c-catalog-content\") pod \"redhat-operators-z4t5f\" (UID: \"1ef25051-c70d-4ece-afa1-c9be4224e48c\") " pod="openshift-marketplace/redhat-operators-z4t5f" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.813287 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvcpf\" (UniqueName: \"kubernetes.io/projected/1ef25051-c70d-4ece-afa1-c9be4224e48c-kube-api-access-pvcpf\") pod \"redhat-operators-z4t5f\" (UID: \"1ef25051-c70d-4ece-afa1-c9be4224e48c\") " pod="openshift-marketplace/redhat-operators-z4t5f" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.813844 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef25051-c70d-4ece-afa1-c9be4224e48c-utilities\") pod \"redhat-operators-z4t5f\" (UID: \"1ef25051-c70d-4ece-afa1-c9be4224e48c\") " pod="openshift-marketplace/redhat-operators-z4t5f" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.816669 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q67f9" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.915765 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef25051-c70d-4ece-afa1-c9be4224e48c-catalog-content\") pod \"redhat-operators-z4t5f\" (UID: \"1ef25051-c70d-4ece-afa1-c9be4224e48c\") " pod="openshift-marketplace/redhat-operators-z4t5f" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.921623 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvcpf\" (UniqueName: \"kubernetes.io/projected/1ef25051-c70d-4ece-afa1-c9be4224e48c-kube-api-access-pvcpf\") pod \"redhat-operators-z4t5f\" (UID: \"1ef25051-c70d-4ece-afa1-c9be4224e48c\") " pod="openshift-marketplace/redhat-operators-z4t5f" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.922190 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef25051-c70d-4ece-afa1-c9be4224e48c-utilities\") pod \"redhat-operators-z4t5f\" (UID: \"1ef25051-c70d-4ece-afa1-c9be4224e48c\") " pod="openshift-marketplace/redhat-operators-z4t5f" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.923595 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef25051-c70d-4ece-afa1-c9be4224e48c-utilities\") pod \"redhat-operators-z4t5f\" (UID: \"1ef25051-c70d-4ece-afa1-c9be4224e48c\") " pod="openshift-marketplace/redhat-operators-z4t5f" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.916397 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef25051-c70d-4ece-afa1-c9be4224e48c-catalog-content\") pod \"redhat-operators-z4t5f\" (UID: \"1ef25051-c70d-4ece-afa1-c9be4224e48c\") " pod="openshift-marketplace/redhat-operators-z4t5f" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.962704 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvcpf\" (UniqueName: \"kubernetes.io/projected/1ef25051-c70d-4ece-afa1-c9be4224e48c-kube-api-access-pvcpf\") pod \"redhat-operators-z4t5f\" (UID: \"1ef25051-c70d-4ece-afa1-c9be4224e48c\") " pod="openshift-marketplace/redhat-operators-z4t5f" Jan 27 20:52:15 crc kubenswrapper[4858]: I0127 20:52:15.993384 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z4t5f" Jan 27 20:52:16 crc kubenswrapper[4858]: I0127 20:52:16.414699 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q67f9"] Jan 27 20:52:16 crc kubenswrapper[4858]: I0127 20:52:16.560058 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z4t5f"] Jan 27 20:52:17 crc kubenswrapper[4858]: I0127 20:52:17.397779 4858 generic.go:334] "Generic (PLEG): container finished" podID="8ece34b1-068e-466a-8c68-185f58248609" containerID="a4b0c6e4224e19429f69d6385b26408ddecab1f62b5b9cd54aebed880868a14d" exitCode=0 Jan 27 20:52:17 crc kubenswrapper[4858]: I0127 20:52:17.397862 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q67f9" event={"ID":"8ece34b1-068e-466a-8c68-185f58248609","Type":"ContainerDied","Data":"a4b0c6e4224e19429f69d6385b26408ddecab1f62b5b9cd54aebed880868a14d"} Jan 27 20:52:17 crc kubenswrapper[4858]: I0127 20:52:17.398157 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q67f9" event={"ID":"8ece34b1-068e-466a-8c68-185f58248609","Type":"ContainerStarted","Data":"57402b4794077b6ccca4e2dc5d7536976f8be0f829262c5b3f0dc073dc1e4a02"} Jan 27 20:52:17 crc kubenswrapper[4858]: I0127 20:52:17.400505 4858 generic.go:334] "Generic (PLEG): container finished" podID="1ef25051-c70d-4ece-afa1-c9be4224e48c" containerID="99dc596c0937a1714ce58bf4f9cb68ae4c4fecb2f7637b53947d94289bb7a42d" exitCode=0 Jan 27 20:52:17 crc kubenswrapper[4858]: I0127 20:52:17.400633 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4t5f" event={"ID":"1ef25051-c70d-4ece-afa1-c9be4224e48c","Type":"ContainerDied","Data":"99dc596c0937a1714ce58bf4f9cb68ae4c4fecb2f7637b53947d94289bb7a42d"} Jan 27 20:52:17 crc kubenswrapper[4858]: I0127 20:52:17.400689 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4t5f" event={"ID":"1ef25051-c70d-4ece-afa1-c9be4224e48c","Type":"ContainerStarted","Data":"c71713c20f3187353e315e25dd9ae682db30ec2d0a9e7ea0197c3893f4f76b8a"} Jan 27 20:52:19 crc kubenswrapper[4858]: I0127 20:52:19.427527 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4t5f" event={"ID":"1ef25051-c70d-4ece-afa1-c9be4224e48c","Type":"ContainerStarted","Data":"0ea53211106f1c6385650b877db93b6f72c861f8ef6cc8f56b3277cb6f153d10"} Jan 27 20:52:19 crc kubenswrapper[4858]: I0127 20:52:19.429920 4858 generic.go:334] "Generic (PLEG): container finished" podID="8ece34b1-068e-466a-8c68-185f58248609" containerID="b4d08f7efa53aadcfb6b3617a5a197d38ca9a25cd29bc627821750ec5b55a27b" exitCode=0 Jan 27 20:52:19 crc kubenswrapper[4858]: I0127 20:52:19.429955 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q67f9" event={"ID":"8ece34b1-068e-466a-8c68-185f58248609","Type":"ContainerDied","Data":"b4d08f7efa53aadcfb6b3617a5a197d38ca9a25cd29bc627821750ec5b55a27b"} Jan 27 20:52:20 crc kubenswrapper[4858]: I0127 20:52:20.441240 4858 generic.go:334] "Generic (PLEG): container finished" podID="3c49607c-dca5-4943-acbc-5c13058a99df" containerID="c76cdfa633778bf524d40ed88e6d977e83e2c451ea5c647234c4a0dc559737cb" exitCode=0 Jan 27 20:52:20 crc kubenswrapper[4858]: I0127 20:52:20.441473 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" 
event={"ID":"3c49607c-dca5-4943-acbc-5c13058a99df","Type":"ContainerDied","Data":"c76cdfa633778bf524d40ed88e6d977e83e2c451ea5c647234c4a0dc559737cb"} Jan 27 20:52:20 crc kubenswrapper[4858]: I0127 20:52:20.448442 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q67f9" event={"ID":"8ece34b1-068e-466a-8c68-185f58248609","Type":"ContainerStarted","Data":"8688f86d65f77e02633c8a658928c013e880c8c8116826299f39d74979c82256"} Jan 27 20:52:20 crc kubenswrapper[4858]: I0127 20:52:20.504604 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q67f9" podStartSLOduration=2.928580684 podStartE2EDuration="5.504571416s" podCreationTimestamp="2026-01-27 20:52:15 +0000 UTC" firstStartedPulling="2026-01-27 20:52:17.399849494 +0000 UTC m=+2682.107665200" lastFinishedPulling="2026-01-27 20:52:19.975840226 +0000 UTC m=+2684.683655932" observedRunningTime="2026-01-27 20:52:20.491739837 +0000 UTC m=+2685.199555563" watchObservedRunningTime="2026-01-27 20:52:20.504571416 +0000 UTC m=+2685.212387122" Jan 27 20:52:21 crc kubenswrapper[4858]: I0127 20:52:21.921418 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.074734 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-ssh-key-openstack-edpm-ipam\") pod \"3c49607c-dca5-4943-acbc-5c13058a99df\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.074807 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-inventory\") pod \"3c49607c-dca5-4943-acbc-5c13058a99df\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.074852 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cllbv\" (UniqueName: \"kubernetes.io/projected/3c49607c-dca5-4943-acbc-5c13058a99df-kube-api-access-cllbv\") pod \"3c49607c-dca5-4943-acbc-5c13058a99df\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.075926 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-libvirt-secret-0\") pod \"3c49607c-dca5-4943-acbc-5c13058a99df\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.075955 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-libvirt-combined-ca-bundle\") pod \"3c49607c-dca5-4943-acbc-5c13058a99df\" (UID: \"3c49607c-dca5-4943-acbc-5c13058a99df\") " Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.085334 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c49607c-dca5-4943-acbc-5c13058a99df-kube-api-access-cllbv" (OuterVolumeSpecName: "kube-api-access-cllbv") pod "3c49607c-dca5-4943-acbc-5c13058a99df" (UID: "3c49607c-dca5-4943-acbc-5c13058a99df"). InnerVolumeSpecName "kube-api-access-cllbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.085452 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "3c49607c-dca5-4943-acbc-5c13058a99df" (UID: "3c49607c-dca5-4943-acbc-5c13058a99df"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.112997 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "3c49607c-dca5-4943-acbc-5c13058a99df" (UID: "3c49607c-dca5-4943-acbc-5c13058a99df"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.113060 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-inventory" (OuterVolumeSpecName: "inventory") pod "3c49607c-dca5-4943-acbc-5c13058a99df" (UID: "3c49607c-dca5-4943-acbc-5c13058a99df"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.113461 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3c49607c-dca5-4943-acbc-5c13058a99df" (UID: "3c49607c-dca5-4943-acbc-5c13058a99df"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.178538 4858 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.178598 4858 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.178608 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.178617 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c49607c-dca5-4943-acbc-5c13058a99df-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.178626 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cllbv\" (UniqueName: \"kubernetes.io/projected/3c49607c-dca5-4943-acbc-5c13058a99df-kube-api-access-cllbv\") on node \"crc\" DevicePath \"\"" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.464849 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" event={"ID":"3c49607c-dca5-4943-acbc-5c13058a99df","Type":"ContainerDied","Data":"1fcc7de2041efa19d881ab05f3358564c8d8bdbf982e9277deed852fb7dcbae2"} Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.465218 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fcc7de2041efa19d881ab05f3358564c8d8bdbf982e9277deed852fb7dcbae2" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.464930 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.574420 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g"] Jan 27 20:52:22 crc kubenswrapper[4858]: E0127 20:52:22.574821 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c49607c-dca5-4943-acbc-5c13058a99df" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.574838 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c49607c-dca5-4943-acbc-5c13058a99df" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.575048 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c49607c-dca5-4943-acbc-5c13058a99df" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.576981 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.579890 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.580078 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.580140 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.582793 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4x4qb" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.583320 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.585433 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.585497 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.595117 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g"] Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.706844 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.707037 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.707110 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.707206 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.707333 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.707450 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.707604 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4krc\" (UniqueName: \"kubernetes.io/projected/cee2f5ea-c848-418b-975f-ba255506d1ae-kube-api-access-k4krc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.708184 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.708429 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.811055 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.811217 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.811253 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.811309 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.811391 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.811642 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.812156 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4krc\" (UniqueName: \"kubernetes.io/projected/cee2f5ea-c848-418b-975f-ba255506d1ae-kube-api-access-k4krc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.812225 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.812771 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.812856 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.817174 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.817378 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.818251 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.818611 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.818866 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.819409 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.820046 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:22 crc kubenswrapper[4858]: I0127 20:52:22.910014 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4krc\" (UniqueName: \"kubernetes.io/projected/cee2f5ea-c848-418b-975f-ba255506d1ae-kube-api-access-k4krc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-87t6g\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:23 crc kubenswrapper[4858]: I0127 20:52:23.196927 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:52:24 crc kubenswrapper[4858]: I0127 20:52:24.047633 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g"] Jan 27 20:52:24 crc kubenswrapper[4858]: I0127 20:52:24.493812 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" event={"ID":"cee2f5ea-c848-418b-975f-ba255506d1ae","Type":"ContainerStarted","Data":"ad4e95de99612dbe74e6ec4f28c9fd0230af5a87302fb340334d89f503510743"} Jan 27 20:52:25 crc kubenswrapper[4858]: I0127 20:52:25.817657 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q67f9" Jan 27 20:52:25 crc kubenswrapper[4858]: I0127 20:52:25.818356 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q67f9" Jan 27 20:52:25 crc kubenswrapper[4858]: I0127 20:52:25.877278 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q67f9" Jan 27 20:52:26 crc kubenswrapper[4858]: I0127 20:52:26.515402 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" event={"ID":"cee2f5ea-c848-418b-975f-ba255506d1ae","Type":"ContainerStarted","Data":"f73821fe101d527a5155e59857fd6db3c27a3aacdc2d3c0d9dfafc61510b8bda"} Jan 27 20:52:26 crc kubenswrapper[4858]: I0127 20:52:26.517853 4858 generic.go:334] "Generic (PLEG): container finished" podID="1ef25051-c70d-4ece-afa1-c9be4224e48c" containerID="0ea53211106f1c6385650b877db93b6f72c861f8ef6cc8f56b3277cb6f153d10" exitCode=0 Jan 27 20:52:26 crc kubenswrapper[4858]: I0127 20:52:26.518673 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4t5f" event={"ID":"1ef25051-c70d-4ece-afa1-c9be4224e48c","Type":"ContainerDied","Data":"0ea53211106f1c6385650b877db93b6f72c861f8ef6cc8f56b3277cb6f153d10"} Jan 27 20:52:26 crc kubenswrapper[4858]: I0127 20:52:26.562663 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" podStartSLOduration=3.137364569 podStartE2EDuration="4.562643359s" podCreationTimestamp="2026-01-27 20:52:22 +0000 UTC" firstStartedPulling="2026-01-27 20:52:24.055130165 +0000 UTC m=+2688.762945891" lastFinishedPulling="2026-01-27 20:52:25.480408975 +0000 UTC m=+2690.188224681" observedRunningTime="2026-01-27 20:52:26.547916736 +0000 UTC m=+2691.255732462" watchObservedRunningTime="2026-01-27 20:52:26.562643359 +0000 UTC m=+2691.270459065" Jan 27 20:52:26 crc kubenswrapper[4858]: I0127 20:52:26.619788 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q67f9" Jan 27 20:52:27 crc kubenswrapper[4858]: I0127 20:52:27.128210 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q67f9"] Jan 27 20:52:27 crc kubenswrapper[4858]: I0127 20:52:27.531815 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4t5f" event={"ID":"1ef25051-c70d-4ece-afa1-c9be4224e48c","Type":"ContainerStarted","Data":"d054a98d9c164016b185713010c6adea8437a797d2e4c716baa3ddcd3d2ce0ce"} Jan 27 20:52:27 crc kubenswrapper[4858]: I0127 20:52:27.551754 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-z4t5f" podStartSLOduration=2.98732861 podStartE2EDuration="12.551729291s" podCreationTimestamp="2026-01-27 20:52:15 +0000 UTC" firstStartedPulling="2026-01-27 20:52:17.402805117 +0000 UTC m=+2682.110620813" lastFinishedPulling="2026-01-27 20:52:26.967205788 +0000 UTC m=+2691.675021494" observedRunningTime="2026-01-27 20:52:27.550304782 +0000 UTC m=+2692.258120508" watchObservedRunningTime="2026-01-27 20:52:27.551729291 +0000 UTC m=+2692.259545007" Jan 27 20:52:28 crc kubenswrapper[4858]: I0127 20:52:28.541352 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q67f9" podUID="8ece34b1-068e-466a-8c68-185f58248609" containerName="registry-server" containerID="cri-o://8688f86d65f77e02633c8a658928c013e880c8c8116826299f39d74979c82256" gracePeriod=2 Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.074943 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q67f9" Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.263243 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q86sm\" (UniqueName: \"kubernetes.io/projected/8ece34b1-068e-466a-8c68-185f58248609-kube-api-access-q86sm\") pod \"8ece34b1-068e-466a-8c68-185f58248609\" (UID: \"8ece34b1-068e-466a-8c68-185f58248609\") " Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.263386 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ece34b1-068e-466a-8c68-185f58248609-catalog-content\") pod \"8ece34b1-068e-466a-8c68-185f58248609\" (UID: \"8ece34b1-068e-466a-8c68-185f58248609\") " Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.263414 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ece34b1-068e-466a-8c68-185f58248609-utilities\") pod \"8ece34b1-068e-466a-8c68-185f58248609\" (UID: \"8ece34b1-068e-466a-8c68-185f58248609\") " Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.264524 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ece34b1-068e-466a-8c68-185f58248609-utilities" (OuterVolumeSpecName: "utilities") pod "8ece34b1-068e-466a-8c68-185f58248609" (UID: "8ece34b1-068e-466a-8c68-185f58248609"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.266638 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ece34b1-068e-466a-8c68-185f58248609-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.272110 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ece34b1-068e-466a-8c68-185f58248609-kube-api-access-q86sm" (OuterVolumeSpecName: "kube-api-access-q86sm") pod "8ece34b1-068e-466a-8c68-185f58248609" (UID: "8ece34b1-068e-466a-8c68-185f58248609"). InnerVolumeSpecName "kube-api-access-q86sm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.293824 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ece34b1-068e-466a-8c68-185f58248609-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ece34b1-068e-466a-8c68-185f58248609" (UID: "8ece34b1-068e-466a-8c68-185f58248609"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.329254 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.329345 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.368450 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q86sm\" (UniqueName: \"kubernetes.io/projected/8ece34b1-068e-466a-8c68-185f58248609-kube-api-access-q86sm\") on node \"crc\" DevicePath \"\"" Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.368762 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ece34b1-068e-466a-8c68-185f58248609-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.553697 4858 generic.go:334] "Generic (PLEG): container finished" podID="8ece34b1-068e-466a-8c68-185f58248609" containerID="8688f86d65f77e02633c8a658928c013e880c8c8116826299f39d74979c82256" exitCode=0 Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.553779 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q67f9" Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.553815 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q67f9" event={"ID":"8ece34b1-068e-466a-8c68-185f58248609","Type":"ContainerDied","Data":"8688f86d65f77e02633c8a658928c013e880c8c8116826299f39d74979c82256"} Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.553859 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q67f9" event={"ID":"8ece34b1-068e-466a-8c68-185f58248609","Type":"ContainerDied","Data":"57402b4794077b6ccca4e2dc5d7536976f8be0f829262c5b3f0dc073dc1e4a02"} Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.553904 4858 scope.go:117] "RemoveContainer" containerID="8688f86d65f77e02633c8a658928c013e880c8c8116826299f39d74979c82256" Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.624641 4858 scope.go:117] "RemoveContainer" containerID="b4d08f7efa53aadcfb6b3617a5a197d38ca9a25cd29bc627821750ec5b55a27b" Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.673616 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q67f9"] Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.704159 4858 scope.go:117] "RemoveContainer" containerID="a4b0c6e4224e19429f69d6385b26408ddecab1f62b5b9cd54aebed880868a14d" Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.738894 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q67f9"] Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.766760 4858 scope.go:117] "RemoveContainer" containerID="8688f86d65f77e02633c8a658928c013e880c8c8116826299f39d74979c82256" Jan 27 20:52:29 crc kubenswrapper[4858]: E0127 20:52:29.771851 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8688f86d65f77e02633c8a658928c013e880c8c8116826299f39d74979c82256\": container with ID starting with 8688f86d65f77e02633c8a658928c013e880c8c8116826299f39d74979c82256 not found: ID does not exist" containerID="8688f86d65f77e02633c8a658928c013e880c8c8116826299f39d74979c82256" Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.771908 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8688f86d65f77e02633c8a658928c013e880c8c8116826299f39d74979c82256"} err="failed to get container status \"8688f86d65f77e02633c8a658928c013e880c8c8116826299f39d74979c82256\": rpc error: code = NotFound desc = could not find container \"8688f86d65f77e02633c8a658928c013e880c8c8116826299f39d74979c82256\": container with ID starting with 8688f86d65f77e02633c8a658928c013e880c8c8116826299f39d74979c82256 not found: ID does not exist" Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.771935 4858 scope.go:117] "RemoveContainer" containerID="b4d08f7efa53aadcfb6b3617a5a197d38ca9a25cd29bc627821750ec5b55a27b" Jan 27 20:52:29 crc kubenswrapper[4858]: E0127 20:52:29.775882 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4d08f7efa53aadcfb6b3617a5a197d38ca9a25cd29bc627821750ec5b55a27b\": container with ID starting with b4d08f7efa53aadcfb6b3617a5a197d38ca9a25cd29bc627821750ec5b55a27b not found: ID does not exist" containerID="b4d08f7efa53aadcfb6b3617a5a197d38ca9a25cd29bc627821750ec5b55a27b" Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.775923 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4d08f7efa53aadcfb6b3617a5a197d38ca9a25cd29bc627821750ec5b55a27b"} err="failed to get container status \"b4d08f7efa53aadcfb6b3617a5a197d38ca9a25cd29bc627821750ec5b55a27b\": rpc error: code = NotFound desc = could not find container \"b4d08f7efa53aadcfb6b3617a5a197d38ca9a25cd29bc627821750ec5b55a27b\": container with ID starting with b4d08f7efa53aadcfb6b3617a5a197d38ca9a25cd29bc627821750ec5b55a27b not found: ID does not exist" Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.775943 4858 scope.go:117] "RemoveContainer" containerID="a4b0c6e4224e19429f69d6385b26408ddecab1f62b5b9cd54aebed880868a14d" Jan 27 20:52:29 crc kubenswrapper[4858]: E0127 20:52:29.777773 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4b0c6e4224e19429f69d6385b26408ddecab1f62b5b9cd54aebed880868a14d\": container with ID starting with a4b0c6e4224e19429f69d6385b26408ddecab1f62b5b9cd54aebed880868a14d not found: ID does not exist" containerID="a4b0c6e4224e19429f69d6385b26408ddecab1f62b5b9cd54aebed880868a14d" Jan 27 20:52:29 crc kubenswrapper[4858]: I0127 20:52:29.777800 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4b0c6e4224e19429f69d6385b26408ddecab1f62b5b9cd54aebed880868a14d"} err="failed to get container status \"a4b0c6e4224e19429f69d6385b26408ddecab1f62b5b9cd54aebed880868a14d\": rpc error: code = NotFound desc = could not find container \"a4b0c6e4224e19429f69d6385b26408ddecab1f62b5b9cd54aebed880868a14d\": container with ID starting with a4b0c6e4224e19429f69d6385b26408ddecab1f62b5b9cd54aebed880868a14d not found: ID does not exist" Jan 27 20:52:30 crc kubenswrapper[4858]: I0127 20:52:30.094407 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ece34b1-068e-466a-8c68-185f58248609" path="/var/lib/kubelet/pods/8ece34b1-068e-466a-8c68-185f58248609/volumes" Jan 27 20:52:35 crc kubenswrapper[4858]: I0127 20:52:35.994581 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z4t5f" Jan 27 20:52:35 crc kubenswrapper[4858]: I0127 20:52:35.995331 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z4t5f" Jan 27 20:52:37 crc kubenswrapper[4858]: I0127 20:52:37.051013 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z4t5f" podUID="1ef25051-c70d-4ece-afa1-c9be4224e48c" containerName="registry-server" probeResult="failure" output=< Jan 27 20:52:37 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 20:52:37 crc kubenswrapper[4858]: > Jan 27 20:52:46 crc kubenswrapper[4858]: I0127 20:52:46.067957 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z4t5f" Jan 27 20:52:46 crc kubenswrapper[4858]: I0127 20:52:46.128512 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z4t5f" Jan 27 20:52:46 crc kubenswrapper[4858]: I0127 20:52:46.672189 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z4t5f"] Jan 27 20:52:47 crc kubenswrapper[4858]: I0127 20:52:47.753647 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z4t5f" 
podUID="1ef25051-c70d-4ece-afa1-c9be4224e48c" containerName="registry-server" containerID="cri-o://d054a98d9c164016b185713010c6adea8437a797d2e4c716baa3ddcd3d2ce0ce" gracePeriod=2 Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.229458 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z4t5f" Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.264022 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef25051-c70d-4ece-afa1-c9be4224e48c-utilities\") pod \"1ef25051-c70d-4ece-afa1-c9be4224e48c\" (UID: \"1ef25051-c70d-4ece-afa1-c9be4224e48c\") " Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.264132 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvcpf\" (UniqueName: \"kubernetes.io/projected/1ef25051-c70d-4ece-afa1-c9be4224e48c-kube-api-access-pvcpf\") pod \"1ef25051-c70d-4ece-afa1-c9be4224e48c\" (UID: \"1ef25051-c70d-4ece-afa1-c9be4224e48c\") " Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.264273 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef25051-c70d-4ece-afa1-c9be4224e48c-catalog-content\") pod \"1ef25051-c70d-4ece-afa1-c9be4224e48c\" (UID: \"1ef25051-c70d-4ece-afa1-c9be4224e48c\") " Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.264945 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ef25051-c70d-4ece-afa1-c9be4224e48c-utilities" (OuterVolumeSpecName: "utilities") pod "1ef25051-c70d-4ece-afa1-c9be4224e48c" (UID: "1ef25051-c70d-4ece-afa1-c9be4224e48c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.270708 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef25051-c70d-4ece-afa1-c9be4224e48c-kube-api-access-pvcpf" (OuterVolumeSpecName: "kube-api-access-pvcpf") pod "1ef25051-c70d-4ece-afa1-c9be4224e48c" (UID: "1ef25051-c70d-4ece-afa1-c9be4224e48c"). InnerVolumeSpecName "kube-api-access-pvcpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.366758 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef25051-c70d-4ece-afa1-c9be4224e48c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.366795 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvcpf\" (UniqueName: \"kubernetes.io/projected/1ef25051-c70d-4ece-afa1-c9be4224e48c-kube-api-access-pvcpf\") on node \"crc\" DevicePath \"\"" Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.410446 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ef25051-c70d-4ece-afa1-c9be4224e48c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ef25051-c70d-4ece-afa1-c9be4224e48c" (UID: "1ef25051-c70d-4ece-afa1-c9be4224e48c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.469519 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef25051-c70d-4ece-afa1-c9be4224e48c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.767211 4858 generic.go:334] "Generic (PLEG): container finished" podID="1ef25051-c70d-4ece-afa1-c9be4224e48c" containerID="d054a98d9c164016b185713010c6adea8437a797d2e4c716baa3ddcd3d2ce0ce" exitCode=0 Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.767577 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4t5f" event={"ID":"1ef25051-c70d-4ece-afa1-c9be4224e48c","Type":"ContainerDied","Data":"d054a98d9c164016b185713010c6adea8437a797d2e4c716baa3ddcd3d2ce0ce"} Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.767620 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z4t5f" event={"ID":"1ef25051-c70d-4ece-afa1-c9be4224e48c","Type":"ContainerDied","Data":"c71713c20f3187353e315e25dd9ae682db30ec2d0a9e7ea0197c3893f4f76b8a"} Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.767648 4858 scope.go:117] "RemoveContainer" containerID="d054a98d9c164016b185713010c6adea8437a797d2e4c716baa3ddcd3d2ce0ce" Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.767884 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z4t5f" Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.808686 4858 scope.go:117] "RemoveContainer" containerID="0ea53211106f1c6385650b877db93b6f72c861f8ef6cc8f56b3277cb6f153d10" Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.811533 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z4t5f"] Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.826063 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z4t5f"] Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.839670 4858 scope.go:117] "RemoveContainer" containerID="99dc596c0937a1714ce58bf4f9cb68ae4c4fecb2f7637b53947d94289bb7a42d" Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.882908 4858 scope.go:117] "RemoveContainer" containerID="d054a98d9c164016b185713010c6adea8437a797d2e4c716baa3ddcd3d2ce0ce" Jan 27 20:52:48 crc kubenswrapper[4858]: E0127 20:52:48.883799 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d054a98d9c164016b185713010c6adea8437a797d2e4c716baa3ddcd3d2ce0ce\": container with ID starting with d054a98d9c164016b185713010c6adea8437a797d2e4c716baa3ddcd3d2ce0ce not found: ID does not exist" containerID="d054a98d9c164016b185713010c6adea8437a797d2e4c716baa3ddcd3d2ce0ce" Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.883857 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d054a98d9c164016b185713010c6adea8437a797d2e4c716baa3ddcd3d2ce0ce"} err="failed to get container status \"d054a98d9c164016b185713010c6adea8437a797d2e4c716baa3ddcd3d2ce0ce\": rpc error: code = NotFound desc = could not find container \"d054a98d9c164016b185713010c6adea8437a797d2e4c716baa3ddcd3d2ce0ce\": container with ID starting with d054a98d9c164016b185713010c6adea8437a797d2e4c716baa3ddcd3d2ce0ce not found: ID does not exist" Jan 27 20:52:48 crc 
kubenswrapper[4858]: I0127 20:52:48.883892 4858 scope.go:117] "RemoveContainer" containerID="0ea53211106f1c6385650b877db93b6f72c861f8ef6cc8f56b3277cb6f153d10" Jan 27 20:52:48 crc kubenswrapper[4858]: E0127 20:52:48.884391 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ea53211106f1c6385650b877db93b6f72c861f8ef6cc8f56b3277cb6f153d10\": container with ID starting with 0ea53211106f1c6385650b877db93b6f72c861f8ef6cc8f56b3277cb6f153d10 not found: ID does not exist" containerID="0ea53211106f1c6385650b877db93b6f72c861f8ef6cc8f56b3277cb6f153d10" Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.884510 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ea53211106f1c6385650b877db93b6f72c861f8ef6cc8f56b3277cb6f153d10"} err="failed to get container status \"0ea53211106f1c6385650b877db93b6f72c861f8ef6cc8f56b3277cb6f153d10\": rpc error: code = NotFound desc = could not find container \"0ea53211106f1c6385650b877db93b6f72c861f8ef6cc8f56b3277cb6f153d10\": container with ID starting with 0ea53211106f1c6385650b877db93b6f72c861f8ef6cc8f56b3277cb6f153d10 not found: ID does not exist" Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.884532 4858 scope.go:117] "RemoveContainer" containerID="99dc596c0937a1714ce58bf4f9cb68ae4c4fecb2f7637b53947d94289bb7a42d" Jan 27 20:52:48 crc kubenswrapper[4858]: E0127 20:52:48.885053 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99dc596c0937a1714ce58bf4f9cb68ae4c4fecb2f7637b53947d94289bb7a42d\": container with ID starting with 99dc596c0937a1714ce58bf4f9cb68ae4c4fecb2f7637b53947d94289bb7a42d not found: ID does not exist" containerID="99dc596c0937a1714ce58bf4f9cb68ae4c4fecb2f7637b53947d94289bb7a42d" Jan 27 20:52:48 crc kubenswrapper[4858]: I0127 20:52:48.885107 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99dc596c0937a1714ce58bf4f9cb68ae4c4fecb2f7637b53947d94289bb7a42d"} err="failed to get container status \"99dc596c0937a1714ce58bf4f9cb68ae4c4fecb2f7637b53947d94289bb7a42d\": rpc error: code = NotFound desc = could not find container \"99dc596c0937a1714ce58bf4f9cb68ae4c4fecb2f7637b53947d94289bb7a42d\": container with ID starting with 99dc596c0937a1714ce58bf4f9cb68ae4c4fecb2f7637b53947d94289bb7a42d not found: ID does not exist" Jan 27 20:52:50 crc kubenswrapper[4858]: I0127 20:52:50.085696 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ef25051-c70d-4ece-afa1-c9be4224e48c" path="/var/lib/kubelet/pods/1ef25051-c70d-4ece-afa1-c9be4224e48c/volumes" Jan 27 20:52:59 crc kubenswrapper[4858]: I0127 20:52:59.328408 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:52:59 crc kubenswrapper[4858]: I0127 20:52:59.329116 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:53:29 crc kubenswrapper[4858]: I0127 20:53:29.328514 4858 patch_prober.go:28] interesting 
Jan 27 20:53:29 crc kubenswrapper[4858]: I0127 20:53:29.329125 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 20:53:29 crc kubenswrapper[4858]: I0127 20:53:29.329187 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq"
Jan 27 20:53:29 crc kubenswrapper[4858]: I0127 20:53:29.330103 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10d7cb8dfa175ff909d1ae286199f4ffd1e3c3decb1330855c0f465448bfbdbf"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 20:53:29 crc kubenswrapper[4858]: I0127 20:53:29.330169 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://10d7cb8dfa175ff909d1ae286199f4ffd1e3c3decb1330855c0f465448bfbdbf" gracePeriod=600
Jan 27 20:53:30 crc kubenswrapper[4858]: I0127 20:53:30.177763 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="10d7cb8dfa175ff909d1ae286199f4ffd1e3c3decb1330855c0f465448bfbdbf" exitCode=0
Jan 27 20:53:30 crc kubenswrapper[4858]: I0127 20:53:30.177850 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"10d7cb8dfa175ff909d1ae286199f4ffd1e3c3decb1330855c0f465448bfbdbf"}
Jan 27 20:53:30 crc kubenswrapper[4858]: I0127 20:53:30.178172 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9"}
Jan 27 20:53:30 crc kubenswrapper[4858]: I0127 20:53:30.178191 4858 scope.go:117] "RemoveContainer" containerID="83e9e639980f016b3b8c8a76ea9f95fa29705d7dc2fd73604ab7e049aedd2710"
Jan 27 20:54:59 crc kubenswrapper[4858]: I0127 20:54:59.127081 4858 generic.go:334] "Generic (PLEG): container finished" podID="cee2f5ea-c848-418b-975f-ba255506d1ae" containerID="f73821fe101d527a5155e59857fd6db3c27a3aacdc2d3c0d9dfafc61510b8bda" exitCode=0
Jan 27 20:54:59 crc kubenswrapper[4858]: I0127 20:54:59.127220 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" event={"ID":"cee2f5ea-c848-418b-975f-ba255506d1ae","Type":"ContainerDied","Data":"f73821fe101d527a5155e59857fd6db3c27a3aacdc2d3c0d9dfafc61510b8bda"}
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.603698 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g"
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.684360 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-migration-ssh-key-0\") pod \"cee2f5ea-c848-418b-975f-ba255506d1ae\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") "
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.684424 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-extra-config-0\") pod \"cee2f5ea-c848-418b-975f-ba255506d1ae\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") "
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.684450 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-inventory\") pod \"cee2f5ea-c848-418b-975f-ba255506d1ae\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") "
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.684498 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4krc\" (UniqueName: \"kubernetes.io/projected/cee2f5ea-c848-418b-975f-ba255506d1ae-kube-api-access-k4krc\") pod \"cee2f5ea-c848-418b-975f-ba255506d1ae\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") "
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.684537 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-combined-ca-bundle\") pod \"cee2f5ea-c848-418b-975f-ba255506d1ae\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") "
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.684610 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-migration-ssh-key-1\") pod \"cee2f5ea-c848-418b-975f-ba255506d1ae\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") "
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.684650 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-cell1-compute-config-0\") pod \"cee2f5ea-c848-418b-975f-ba255506d1ae\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") "
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.684683 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-cell1-compute-config-1\") pod \"cee2f5ea-c848-418b-975f-ba255506d1ae\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") "
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.684755 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-ssh-key-openstack-edpm-ipam\") pod \"cee2f5ea-c848-418b-975f-ba255506d1ae\" (UID: \"cee2f5ea-c848-418b-975f-ba255506d1ae\") "
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.692155 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cee2f5ea-c848-418b-975f-ba255506d1ae-kube-api-access-k4krc" (OuterVolumeSpecName: "kube-api-access-k4krc") pod "cee2f5ea-c848-418b-975f-ba255506d1ae" (UID: "cee2f5ea-c848-418b-975f-ba255506d1ae"). InnerVolumeSpecName "kube-api-access-k4krc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.701889 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "cee2f5ea-c848-418b-975f-ba255506d1ae" (UID: "cee2f5ea-c848-418b-975f-ba255506d1ae"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.716839 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "cee2f5ea-c848-418b-975f-ba255506d1ae" (UID: "cee2f5ea-c848-418b-975f-ba255506d1ae"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.716871 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "cee2f5ea-c848-418b-975f-ba255506d1ae" (UID: "cee2f5ea-c848-418b-975f-ba255506d1ae"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.716900 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cee2f5ea-c848-418b-975f-ba255506d1ae" (UID: "cee2f5ea-c848-418b-975f-ba255506d1ae"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.717374 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "cee2f5ea-c848-418b-975f-ba255506d1ae" (UID: "cee2f5ea-c848-418b-975f-ba255506d1ae"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.718211 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "cee2f5ea-c848-418b-975f-ba255506d1ae" (UID: "cee2f5ea-c848-418b-975f-ba255506d1ae"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.720910 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-inventory" (OuterVolumeSpecName: "inventory") pod "cee2f5ea-c848-418b-975f-ba255506d1ae" (UID: "cee2f5ea-c848-418b-975f-ba255506d1ae"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.724467 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "cee2f5ea-c848-418b-975f-ba255506d1ae" (UID: "cee2f5ea-c848-418b-975f-ba255506d1ae"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.787406 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.787436 4858 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.787447 4858 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.787459 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.787468 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4krc\" (UniqueName: \"kubernetes.io/projected/cee2f5ea-c848-418b-975f-ba255506d1ae-kube-api-access-k4krc\") on node \"crc\" DevicePath \"\"" Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.787477 4858 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.787486 4858 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.787494 4858 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:55:00 crc kubenswrapper[4858]: I0127 20:55:00.787504 4858 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cee2f5ea-c848-418b-975f-ba255506d1ae-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.150857 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" event={"ID":"cee2f5ea-c848-418b-975f-ba255506d1ae","Type":"ContainerDied","Data":"ad4e95de99612dbe74e6ec4f28c9fd0230af5a87302fb340334d89f503510743"} Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.150904 4858 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ad4e95de99612dbe74e6ec4f28c9fd0230af5a87302fb340334d89f503510743" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.150934 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-87t6g" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.264296 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm"] Jan 27 20:55:01 crc kubenswrapper[4858]: E0127 20:55:01.264737 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ece34b1-068e-466a-8c68-185f58248609" containerName="extract-utilities" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.264757 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ece34b1-068e-466a-8c68-185f58248609" containerName="extract-utilities" Jan 27 20:55:01 crc kubenswrapper[4858]: E0127 20:55:01.264790 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef25051-c70d-4ece-afa1-c9be4224e48c" containerName="registry-server" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.264799 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef25051-c70d-4ece-afa1-c9be4224e48c" containerName="registry-server" Jan 27 20:55:01 crc kubenswrapper[4858]: E0127 20:55:01.264819 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ece34b1-068e-466a-8c68-185f58248609" containerName="extract-content" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.264827 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ece34b1-068e-466a-8c68-185f58248609" containerName="extract-content" Jan 27 20:55:01 crc kubenswrapper[4858]: E0127 20:55:01.264847 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef25051-c70d-4ece-afa1-c9be4224e48c" containerName="extract-utilities" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.264854 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef25051-c70d-4ece-afa1-c9be4224e48c" containerName="extract-utilities" Jan 27 20:55:01 crc kubenswrapper[4858]: E0127 20:55:01.264868 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef25051-c70d-4ece-afa1-c9be4224e48c" containerName="extract-content" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.264874 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef25051-c70d-4ece-afa1-c9be4224e48c" containerName="extract-content" Jan 27 20:55:01 crc kubenswrapper[4858]: E0127 20:55:01.264893 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee2f5ea-c848-418b-975f-ba255506d1ae" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.264899 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee2f5ea-c848-418b-975f-ba255506d1ae" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 20:55:01 crc kubenswrapper[4858]: E0127 20:55:01.264909 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ece34b1-068e-466a-8c68-185f58248609" containerName="registry-server" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.264915 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ece34b1-068e-466a-8c68-185f58248609" containerName="registry-server" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.266252 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ece34b1-068e-466a-8c68-185f58248609" containerName="registry-server" Jan 27 20:55:01 crc kubenswrapper[4858]: 
I0127 20:55:01.266279 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ef25051-c70d-4ece-afa1-c9be4224e48c" containerName="registry-server" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.266293 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cee2f5ea-c848-418b-975f-ba255506d1ae" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.267402 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.269701 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.269912 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.270073 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.270200 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.270341 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-4x4qb" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.277210 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm"] Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.296903 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.296998 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.297094 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.297183 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.297252 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.297341 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.297680 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmpvj\" (UniqueName: \"kubernetes.io/projected/9116e36c-794b-4e0c-ad98-58f8daa17fc1-kube-api-access-zmpvj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.399456 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.399522 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.399580 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.399679 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmpvj\" (UniqueName: \"kubernetes.io/projected/9116e36c-794b-4e0c-ad98-58f8daa17fc1-kube-api-access-zmpvj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.399726 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.399749 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.399804 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.405697 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.405814 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.406191 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.412058 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.412668 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.414031 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.422397 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmpvj\" (UniqueName: \"kubernetes.io/projected/9116e36c-794b-4e0c-ad98-58f8daa17fc1-kube-api-access-zmpvj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:01 crc kubenswrapper[4858]: I0127 20:55:01.585575 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:55:02 crc kubenswrapper[4858]: I0127 20:55:02.148231 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm"] Jan 27 20:55:02 crc kubenswrapper[4858]: I0127 20:55:02.164683 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" event={"ID":"9116e36c-794b-4e0c-ad98-58f8daa17fc1","Type":"ContainerStarted","Data":"9908d1cde3c1a698c5697f07228d587802ddf64f97c1226cbf7c03c7e769870b"} Jan 27 20:55:03 crc kubenswrapper[4858]: I0127 20:55:03.178202 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" event={"ID":"9116e36c-794b-4e0c-ad98-58f8daa17fc1","Type":"ContainerStarted","Data":"69095b07172eacaffa41ecba1eb2a172fd2590df493f5f1c8187d6cbd8c4d195"} Jan 27 20:55:03 crc kubenswrapper[4858]: I0127 20:55:03.209373 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" podStartSLOduration=1.739385387 podStartE2EDuration="2.209354265s" podCreationTimestamp="2026-01-27 20:55:01 +0000 UTC" firstStartedPulling="2026-01-27 20:55:02.154927999 +0000 UTC m=+2846.862743715" lastFinishedPulling="2026-01-27 20:55:02.624896887 +0000 UTC m=+2847.332712593" observedRunningTime="2026-01-27 20:55:03.201259015 +0000 UTC m=+2847.909074731" watchObservedRunningTime="2026-01-27 20:55:03.209354265 +0000 UTC m=+2847.917169971" Jan 27 20:55:29 crc kubenswrapper[4858]: I0127 20:55:29.329113 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:55:29 crc kubenswrapper[4858]: I0127 20:55:29.329632 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:55:59 crc kubenswrapper[4858]: I0127 20:55:59.329097 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:55:59 crc kubenswrapper[4858]: I0127 20:55:59.329845 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:56:29 crc kubenswrapper[4858]: I0127 20:56:29.328487 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 20:56:29 crc kubenswrapper[4858]: I0127 20:56:29.329684 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 20:56:29 crc kubenswrapper[4858]: I0127 20:56:29.329754 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 20:56:29 crc kubenswrapper[4858]: I0127 20:56:29.330378 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 20:56:29 crc kubenswrapper[4858]: I0127 20:56:29.330438 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" gracePeriod=600 Jan 27 20:56:29 crc kubenswrapper[4858]: E0127 20:56:29.458357 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:56:30 crc kubenswrapper[4858]: I0127 20:56:30.300126 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" exitCode=0 Jan 27 20:56:30 crc kubenswrapper[4858]: I0127 20:56:30.300391 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9"} Jan 27 20:56:30 crc kubenswrapper[4858]: I0127 20:56:30.300470 4858 scope.go:117] "RemoveContainer" containerID="10d7cb8dfa175ff909d1ae286199f4ffd1e3c3decb1330855c0f465448bfbdbf" Jan 27 
20:56:30 crc kubenswrapper[4858]: I0127 20:56:30.301493 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 20:56:30 crc kubenswrapper[4858]: E0127 20:56:30.302002 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:56:41 crc kubenswrapper[4858]: I0127 20:56:41.070910 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 20:56:41 crc kubenswrapper[4858]: E0127 20:56:41.071657 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:56:56 crc kubenswrapper[4858]: I0127 20:56:56.077593 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 20:56:56 crc kubenswrapper[4858]: E0127 20:56:56.078182 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:57:09 crc kubenswrapper[4858]: I0127 20:57:09.073930 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 20:57:09 crc kubenswrapper[4858]: E0127 20:57:09.076168 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:57:22 crc kubenswrapper[4858]: I0127 20:57:22.014962 4858 generic.go:334] "Generic (PLEG): container finished" podID="9116e36c-794b-4e0c-ad98-58f8daa17fc1" containerID="69095b07172eacaffa41ecba1eb2a172fd2590df493f5f1c8187d6cbd8c4d195" exitCode=0 Jan 27 20:57:22 crc kubenswrapper[4858]: I0127 20:57:22.015079 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" event={"ID":"9116e36c-794b-4e0c-ad98-58f8daa17fc1","Type":"ContainerDied","Data":"69095b07172eacaffa41ecba1eb2a172fd2590df493f5f1c8187d6cbd8c4d195"} Jan 27 20:57:22 crc kubenswrapper[4858]: I0127 20:57:22.072383 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 20:57:22 crc kubenswrapper[4858]: E0127 20:57:22.072732 4858 pod_workers.go:1301] "Error syncing 
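From 20:56:29 onward every restart attempt for machine-config-daemon is rejected with CrashLoopBackOff, and the message names the back-off window: 5m0s. That is the cap of the kubelet's container restart back-off, which by default starts at 10s and doubles per crash; once a container has crashed often enough, retries are held at five minutes, which is why the "Error syncing pod, skipping" records keep recurring for the rest of this log while the periodic sync loop re-queues the pod. A small sketch of the delay sequence, with the kubelet defaults assumed:

```python
from itertools import islice

def crashloop_delays(base=10.0, factor=2.0, cap=300.0):
    """Yield successive restart back-off delays in seconds.

    base/factor/cap are the kubelet defaults assumed here:
    10s initial delay, doubling per crash, capped at 5m0s --
    the "back-off 5m0s" printed in the records above.
    """
    delay = base
    while True:
        yield min(delay, cap)
        delay *= factor

print(list(islice(crashloop_delays(), 8)))
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]
```

By default the counter resets only after the container runs cleanly for a while, so a probe that keeps failing (as here) pins the pod at the cap indefinitely.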
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.462971 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.554898 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-telemetry-combined-ca-bundle\") pod \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.555212 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-0\") pod \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.555268 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-inventory\") pod \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.555310 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ssh-key-openstack-edpm-ipam\") pod \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.555348 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmpvj\" (UniqueName: \"kubernetes.io/projected/9116e36c-794b-4e0c-ad98-58f8daa17fc1-kube-api-access-zmpvj\") pod \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.555368 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-2\") pod \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.555423 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-1\") pod \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\" (UID: \"9116e36c-794b-4e0c-ad98-58f8daa17fc1\") " Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.561915 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "9116e36c-794b-4e0c-ad98-58f8daa17fc1" 
(UID: "9116e36c-794b-4e0c-ad98-58f8daa17fc1"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.564195 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9116e36c-794b-4e0c-ad98-58f8daa17fc1-kube-api-access-zmpvj" (OuterVolumeSpecName: "kube-api-access-zmpvj") pod "9116e36c-794b-4e0c-ad98-58f8daa17fc1" (UID: "9116e36c-794b-4e0c-ad98-58f8daa17fc1"). InnerVolumeSpecName "kube-api-access-zmpvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.594319 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "9116e36c-794b-4e0c-ad98-58f8daa17fc1" (UID: "9116e36c-794b-4e0c-ad98-58f8daa17fc1"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.598808 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "9116e36c-794b-4e0c-ad98-58f8daa17fc1" (UID: "9116e36c-794b-4e0c-ad98-58f8daa17fc1"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.600015 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-inventory" (OuterVolumeSpecName: "inventory") pod "9116e36c-794b-4e0c-ad98-58f8daa17fc1" (UID: "9116e36c-794b-4e0c-ad98-58f8daa17fc1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.602294 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9116e36c-794b-4e0c-ad98-58f8daa17fc1" (UID: "9116e36c-794b-4e0c-ad98-58f8daa17fc1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.624353 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "9116e36c-794b-4e0c-ad98-58f8daa17fc1" (UID: "9116e36c-794b-4e0c-ad98-58f8daa17fc1"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.658181 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.658215 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmpvj\" (UniqueName: \"kubernetes.io/projected/9116e36c-794b-4e0c-ad98-58f8daa17fc1-kube-api-access-zmpvj\") on node \"crc\" DevicePath \"\"" Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.658228 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.658240 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.658252 4858 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.658264 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:57:23 crc kubenswrapper[4858]: I0127 20:57:23.658275 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9116e36c-794b-4e0c-ad98-58f8daa17fc1-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 20:57:24 crc kubenswrapper[4858]: I0127 20:57:24.041311 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" event={"ID":"9116e36c-794b-4e0c-ad98-58f8daa17fc1","Type":"ContainerDied","Data":"9908d1cde3c1a698c5697f07228d587802ddf64f97c1226cbf7c03c7e769870b"} Jan 27 20:57:24 crc kubenswrapper[4858]: I0127 20:57:24.041362 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9908d1cde3c1a698c5697f07228d587802ddf64f97c1226cbf7c03c7e769870b" Jan 27 20:57:24 crc kubenswrapper[4858]: I0127 20:57:24.041421 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm" Jan 27 20:57:34 crc kubenswrapper[4858]: I0127 20:57:34.072273 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 20:57:34 crc kubenswrapper[4858]: E0127 20:57:34.073253 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:57:47 crc kubenswrapper[4858]: I0127 20:57:47.071325 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 20:57:47 crc kubenswrapper[4858]: E0127 20:57:47.072450 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:58:02 crc kubenswrapper[4858]: I0127 20:58:02.071335 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 20:58:02 crc kubenswrapper[4858]: E0127 20:58:02.073407 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.678750 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 27 20:58:03 crc kubenswrapper[4858]: E0127 20:58:03.679516 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9116e36c-794b-4e0c-ad98-58f8daa17fc1" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.679537 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9116e36c-794b-4e0c-ad98-58f8daa17fc1" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.679847 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9116e36c-794b-4e0c-ad98-58f8daa17fc1" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.681120 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.688877 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.736393 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.789461 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.791341 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.795758 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.813873 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-run\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.813939 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.814218 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-config-data-custom\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.814520 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-sys\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.814582 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.814652 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.814727 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-scripts\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 
20:58:03.814765 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw54p\" (UniqueName: \"kubernetes.io/projected/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-kube-api-access-gw54p\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.814803 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-lib-modules\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.814841 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-etc-nvme\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.814866 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.814905 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.814964 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-config-data\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.814994 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.815021 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-dev\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0" Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.816191 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.835540 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.837708 4858 util.go:30] "No sandbox for pod can be found. 
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.841381 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.847951 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"]
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916293 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916343 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916374 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916398 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzzk8\" (UniqueName: \"kubernetes.io/projected/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-kube-api-access-lzzk8\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916424 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-scripts\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916446 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw54p\" (UniqueName: \"kubernetes.io/projected/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-kube-api-access-gw54p\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916466 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-run\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916486 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916505 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916534 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-lib-modules\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916573 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916641 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916714 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916761 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-lib-modules\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916828 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-etc-nvme\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916925 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916953 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8pwr\" (UniqueName: \"kubernetes.io/projected/292ec3c5-71af-43c3-8bee-e815c3876637-kube-api-access-r8pwr\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916952 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-etc-nvme\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.916981 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917003 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917058 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/292ec3c5-71af-43c3-8bee-e815c3876637-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917113 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917194 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917205 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917272 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/292ec3c5-71af-43c3-8bee-e815c3876637-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917299 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917349 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917386 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-config-data\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917422 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917457 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917479 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917512 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-dev\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917583 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/292ec3c5-71af-43c3-8bee-e815c3876637-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917659 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-run\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917706 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917729 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-dev\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917750 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-sys\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917781 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917799 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-dev\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917807 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917844 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917872 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917897 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917921 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917965 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.917987 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.918008 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-config-data-custom\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.918056 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/292ec3c5-71af-43c3-8bee-e815c3876637-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.918082 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.918150 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-sys\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.918188 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.918435 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-run\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.918458 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.918617 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.918667 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-sys\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.923803 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.924648 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-config-data\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.926149 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-config-data-custom\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.933150 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-scripts\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
Jan 27 20:58:03 crc kubenswrapper[4858]: I0127 20:58:03.944402 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw54p\" (UniqueName: \"kubernetes.io/projected/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-kube-api-access-gw54p\") pod \"cinder-backup-0\" (UID: \"2106fe3e-bec7-4072-ba21-4f55b4a1b37a\") " pod="openstack/cinder-backup-0"
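All fourteen cinder-backup-0 volumes have now reported MountVolume.SetUp succeeded. Each UniqueName in these entries encodes the volume plugin and the owning pod UID, which makes the storm easy to group mechanically. A small illustrative helper; the split is a convention read off this log, not a stable interface:

```python
# Each UniqueName above looks like "kubernetes.io/<plugin>/<pod-uid>-<volume>".
from collections import defaultdict

names = [
    "kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-run",
    "kubernetes.io/host-path/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-dev",
    "kubernetes.io/secret/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-config-data",
    "kubernetes.io/secret/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-scripts",
    "kubernetes.io/projected/2106fe3e-bec7-4072-ba21-4f55b4a1b37a-kube-api-access-gw54p",
]

by_plugin = defaultdict(list)
for unique_name in names:
    _, plugin, tail = unique_name.split("/", 2)  # e.g. "host-path", "<uid>-<volume>"
    volume = tail.split("-", 5)[-1]              # skip the five hyphenated UID groups
    by_plugin[plugin].append(volume)

print(dict(by_plugin))
# {'host-path': ['run', 'dev'], 'secret': ['config-data', 'scripts'],
#  'projected': ['kube-api-access-gw54p']}
```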
pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020077 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020110 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-sys\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020136 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-sys\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020155 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020191 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020192 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-dev\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020232 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020265 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020266 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020302 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: 
\"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020303 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020326 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020347 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020375 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020419 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/292ec3c5-71af-43c3-8bee-e815c3876637-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020454 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020448 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020486 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020639 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020692 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-run\") 
pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020725 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzzk8\" (UniqueName: \"kubernetes.io/projected/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-kube-api-access-lzzk8\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020766 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-run\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020803 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020828 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020836 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020871 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020427 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020914 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-run\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020922 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020930 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" 
(UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.020994 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.021023 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.021060 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.021103 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8pwr\" (UniqueName: \"kubernetes.io/projected/292ec3c5-71af-43c3-8bee-e815c3876637-kube-api-access-r8pwr\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.021136 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.021179 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/292ec3c5-71af-43c3-8bee-e815c3876637-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.021259 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.021296 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/292ec3c5-71af-43c3-8bee-e815c3876637-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.021326 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc 
kubenswrapper[4858]: I0127 20:58:04.021456 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.021500 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.021929 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.022722 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/292ec3c5-71af-43c3-8bee-e815c3876637-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.021179 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.025029 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.025409 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.026007 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/292ec3c5-71af-43c3-8bee-e815c3876637-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.025882 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.028281 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/292ec3c5-71af-43c3-8bee-e815c3876637-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: 
\"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.028334 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/292ec3c5-71af-43c3-8bee-e815c3876637-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.029951 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.038026 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/292ec3c5-71af-43c3-8bee-e815c3876637-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.039127 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8pwr\" (UniqueName: \"kubernetes.io/projected/292ec3c5-71af-43c3-8bee-e815c3876637-kube-api-access-r8pwr\") pod \"cinder-volume-nfs-0\" (UID: \"292ec3c5-71af-43c3-8bee-e815c3876637\") " pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.039149 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.042785 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzzk8\" (UniqueName: \"kubernetes.io/projected/9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d-kube-api-access-lzzk8\") pod \"cinder-volume-nfs-2-0\" (UID: \"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d\") " pod="openstack/cinder-volume-nfs-2-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.114110 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.167055 4858 util.go:30] "No sandbox for pod can be found. 
Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.672488 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.681090 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.798730 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"]
Jan 27 20:58:04 crc kubenswrapper[4858]: I0127 20:58:04.899362 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"]
Jan 27 20:58:05 crc kubenswrapper[4858]: I0127 20:58:05.441976 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"292ec3c5-71af-43c3-8bee-e815c3876637","Type":"ContainerStarted","Data":"ab9c8c38cb240f84297787bdfc9547bc9ceb94e91a036293758dc007aa5c3027"}
Jan 27 20:58:05 crc kubenswrapper[4858]: I0127 20:58:05.450325 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"2106fe3e-bec7-4072-ba21-4f55b4a1b37a","Type":"ContainerStarted","Data":"b251baf7a6ac0798b1eecb8765c5442539587374e60dd8b71992d9eb55a48aec"}
Jan 27 20:58:05 crc kubenswrapper[4858]: I0127 20:58:05.453903 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d","Type":"ContainerStarted","Data":"95dfb95de1d7a8d4926e2aae1fbad5b5e5ae3d0a87c26085438b4cf0ba520762"}
Jan 27 20:58:06 crc kubenswrapper[4858]: I0127 20:58:06.463786 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"2106fe3e-bec7-4072-ba21-4f55b4a1b37a","Type":"ContainerStarted","Data":"d426770aa061afd6b9f2e60d9e3edd57a2719894d27ebe696bd65c04ec19fb74"}
Jan 27 20:58:06 crc kubenswrapper[4858]: I0127 20:58:06.464376 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"2106fe3e-bec7-4072-ba21-4f55b4a1b37a","Type":"ContainerStarted","Data":"a1cad3e77325edd449099991902e3b6e6075380b19e23d63493bc908a417359a"}
Jan 27 20:58:06 crc kubenswrapper[4858]: I0127 20:58:06.466132 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d","Type":"ContainerStarted","Data":"7aee163fa82f63542de54b86546360c7604d6bed4c5012999f4dc7dc3c47d62d"}
Jan 27 20:58:06 crc kubenswrapper[4858]: I0127 20:58:06.466255 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d","Type":"ContainerStarted","Data":"2b14c1e382d477fb716e5b0687059b2ec223dc821a12338acfb1aedcd4c3d595"}
Jan 27 20:58:06 crc kubenswrapper[4858]: I0127 20:58:06.468064 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"292ec3c5-71af-43c3-8bee-e815c3876637","Type":"ContainerStarted","Data":"1f0c7291bc25efd16fc65301acb6d9a22158939888ee16388c4d13e631629832"}
Jan 27 20:58:06 crc kubenswrapper[4858]: I0127 20:58:06.468177 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"292ec3c5-71af-43c3-8bee-e815c3876637","Type":"ContainerStarted","Data":"b643574c27c5ccc687dcc4cb6e9a94cb4ca2d4a3ce1a7d70a40ddbd2dfc5ea3a"}
Jan 27 20:58:06 crc kubenswrapper[4858]: I0127 20:58:06.492242 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.270563361 podStartE2EDuration="3.492221195s" podCreationTimestamp="2026-01-27 20:58:03 +0000 UTC" firstStartedPulling="2026-01-27 20:58:04.680845386 +0000 UTC m=+3029.388661092" lastFinishedPulling="2026-01-27 20:58:04.90250322 +0000 UTC m=+3029.610318926" observedRunningTime="2026-01-27 20:58:06.487126 +0000 UTC m=+3031.194941716" watchObservedRunningTime="2026-01-27 20:58:06.492221195 +0000 UTC m=+3031.200036901"
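The startup-latency record above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check with the cinder-backup-0 timestamps hand-copied from that record (nanoseconds truncated to microseconds); the two nfs records that follow agree the same way, modulo float rounding:

```python
from datetime import datetime, timezone

def ts(s: str) -> datetime:
    # "2026-01-27 20:58:04.680845386" -> aware datetime, ns truncated to us
    base, _, frac = s.partition(".")
    us = int(frac[:6]) if frac else 0
    return datetime.strptime(base, "%Y-%m-%d %H:%M:%S").replace(
        microsecond=us, tzinfo=timezone.utc)

created   = ts("2026-01-27 20:58:03")            # podCreationTimestamp
running   = ts("2026-01-27 20:58:06.492221195")  # watchObservedRunningTime
pull_from = ts("2026-01-27 20:58:04.680845386")  # firstStartedPulling
pull_to   = ts("2026-01-27 20:58:04.90250322")   # lastFinishedPulling

e2e = (running - created).total_seconds()
slo = e2e - (pull_to - pull_from).total_seconds()
print(f"E2E={e2e:.6f}s SLO={slo:.6f}s")  # ~3.492221s and ~3.270563s, as logged
```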
Jan 27 20:58:06 crc kubenswrapper[4858]: I0127 20:58:06.514664 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=3.26240655 podStartE2EDuration="3.514624691s" podCreationTimestamp="2026-01-27 20:58:03 +0000 UTC" firstStartedPulling="2026-01-27 20:58:04.901217753 +0000 UTC m=+3029.609033459" lastFinishedPulling="2026-01-27 20:58:05.153435894 +0000 UTC m=+3029.861251600" observedRunningTime="2026-01-27 20:58:06.509569867 +0000 UTC m=+3031.217385583" watchObservedRunningTime="2026-01-27 20:58:06.514624691 +0000 UTC m=+3031.222440397"
Jan 27 20:58:06 crc kubenswrapper[4858]: I0127 20:58:06.538881 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=3.243600136 podStartE2EDuration="3.538845058s" podCreationTimestamp="2026-01-27 20:58:03 +0000 UTC" firstStartedPulling="2026-01-27 20:58:04.854672482 +0000 UTC m=+3029.562488188" lastFinishedPulling="2026-01-27 20:58:05.149917364 +0000 UTC m=+3029.857733110" observedRunningTime="2026-01-27 20:58:06.527747263 +0000 UTC m=+3031.235562989" watchObservedRunningTime="2026-01-27 20:58:06.538845058 +0000 UTC m=+3031.246660774"
Jan 27 20:58:09 crc kubenswrapper[4858]: I0127 20:58:09.040288 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0"
Jan 27 20:58:09 crc kubenswrapper[4858]: I0127 20:58:09.115216 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0"
Jan 27 20:58:09 crc kubenswrapper[4858]: I0127 20:58:09.168113 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0"
Jan 27 20:58:13 crc kubenswrapper[4858]: I0127 20:58:13.071202 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9"
Jan 27 20:58:13 crc kubenswrapper[4858]: E0127 20:58:13.072012 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
Jan 27 20:58:14 crc kubenswrapper[4858]: I0127 20:58:14.298193 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0"
Jan 27 20:58:14 crc kubenswrapper[4858]: I0127 20:58:14.300127 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0"
Jan 27 20:58:14 crc kubenswrapper[4858]: I0127 20:58:14.315671 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0"
Jan 27 20:58:24 crc kubenswrapper[4858]: I0127 20:58:24.072166 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9"
Jan 27 20:58:24 crc kubenswrapper[4858]: E0127 20:58:24.073694 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
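The machine-config-daemon entries above (and again further below) show the restart back-off already pinned at its 5m0s ceiling, so every sync attempt is skipped until the back-off window expires. A rough sketch of the shape of that back-off; the 10s initial delay and doubling are the commonly cited kubelet defaults, assumed here rather than read from this log:

```python
# Illustrative only: exponential container-restart back-off capped at 5m,
# matching the "back-off 5m0s" ceiling in the CrashLoopBackOff entries above.
def backoff_seconds(restart_count: int, base: float = 10.0, cap: float = 300.0) -> float:
    return min(base * (2 ** restart_count), cap)

for n in range(7):
    print(n, backoff_seconds(n))  # 10, 20, 40, 80, 160, 300, 300
```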
containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 20:58:24 crc kubenswrapper[4858]: E0127 20:58:24.073694 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:58:30 crc kubenswrapper[4858]: I0127 20:58:30.736699 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9p6hm"] Jan 27 20:58:30 crc kubenswrapper[4858]: I0127 20:58:30.743895 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9p6hm" Jan 27 20:58:30 crc kubenswrapper[4858]: I0127 20:58:30.756434 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9p6hm"] Jan 27 20:58:30 crc kubenswrapper[4858]: I0127 20:58:30.872756 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4005ab-2404-4ade-bea7-2859f0bcf6de-utilities\") pod \"certified-operators-9p6hm\" (UID: \"2c4005ab-2404-4ade-bea7-2859f0bcf6de\") " pod="openshift-marketplace/certified-operators-9p6hm" Jan 27 20:58:30 crc kubenswrapper[4858]: I0127 20:58:30.872792 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfz52\" (UniqueName: \"kubernetes.io/projected/2c4005ab-2404-4ade-bea7-2859f0bcf6de-kube-api-access-cfz52\") pod \"certified-operators-9p6hm\" (UID: \"2c4005ab-2404-4ade-bea7-2859f0bcf6de\") " pod="openshift-marketplace/certified-operators-9p6hm" Jan 27 20:58:30 crc kubenswrapper[4858]: I0127 20:58:30.872844 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4005ab-2404-4ade-bea7-2859f0bcf6de-catalog-content\") pod \"certified-operators-9p6hm\" (UID: \"2c4005ab-2404-4ade-bea7-2859f0bcf6de\") " pod="openshift-marketplace/certified-operators-9p6hm" Jan 27 20:58:30 crc kubenswrapper[4858]: I0127 20:58:30.975252 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4005ab-2404-4ade-bea7-2859f0bcf6de-utilities\") pod \"certified-operators-9p6hm\" (UID: \"2c4005ab-2404-4ade-bea7-2859f0bcf6de\") " pod="openshift-marketplace/certified-operators-9p6hm" Jan 27 20:58:30 crc kubenswrapper[4858]: I0127 20:58:30.975309 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfz52\" (UniqueName: \"kubernetes.io/projected/2c4005ab-2404-4ade-bea7-2859f0bcf6de-kube-api-access-cfz52\") pod \"certified-operators-9p6hm\" (UID: \"2c4005ab-2404-4ade-bea7-2859f0bcf6de\") " pod="openshift-marketplace/certified-operators-9p6hm" Jan 27 20:58:30 crc kubenswrapper[4858]: I0127 20:58:30.975383 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4005ab-2404-4ade-bea7-2859f0bcf6de-catalog-content\") pod \"certified-operators-9p6hm\" (UID: \"2c4005ab-2404-4ade-bea7-2859f0bcf6de\") " pod="openshift-marketplace/certified-operators-9p6hm" Jan 27 
Jan 27 20:58:30 crc kubenswrapper[4858]: I0127 20:58:30.975899 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4005ab-2404-4ade-bea7-2859f0bcf6de-utilities\") pod \"certified-operators-9p6hm\" (UID: \"2c4005ab-2404-4ade-bea7-2859f0bcf6de\") " pod="openshift-marketplace/certified-operators-9p6hm"
Jan 27 20:58:30 crc kubenswrapper[4858]: I0127 20:58:30.975922 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4005ab-2404-4ade-bea7-2859f0bcf6de-catalog-content\") pod \"certified-operators-9p6hm\" (UID: \"2c4005ab-2404-4ade-bea7-2859f0bcf6de\") " pod="openshift-marketplace/certified-operators-9p6hm"
Jan 27 20:58:31 crc kubenswrapper[4858]: I0127 20:58:31.002997 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfz52\" (UniqueName: \"kubernetes.io/projected/2c4005ab-2404-4ade-bea7-2859f0bcf6de-kube-api-access-cfz52\") pod \"certified-operators-9p6hm\" (UID: \"2c4005ab-2404-4ade-bea7-2859f0bcf6de\") " pod="openshift-marketplace/certified-operators-9p6hm"
Jan 27 20:58:31 crc kubenswrapper[4858]: I0127 20:58:31.066043 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9p6hm"
Jan 27 20:58:31 crc kubenswrapper[4858]: I0127 20:58:31.643646 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9p6hm"]
Jan 27 20:58:31 crc kubenswrapper[4858]: I0127 20:58:31.739688 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9p6hm" event={"ID":"2c4005ab-2404-4ade-bea7-2859f0bcf6de","Type":"ContainerStarted","Data":"40f8c4c2bd3ce71d4b3a5bff738f55b9cc225918aa81c400f11ff03a31b819c8"}
Jan 27 20:58:32 crc kubenswrapper[4858]: I0127 20:58:32.756881 4858 generic.go:334] "Generic (PLEG): container finished" podID="2c4005ab-2404-4ade-bea7-2859f0bcf6de" containerID="47a1877aec150d40167e4b558e458a22f2b70eda64296843086a93bce5ec6d7e" exitCode=0
Jan 27 20:58:32 crc kubenswrapper[4858]: I0127 20:58:32.756998 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9p6hm" event={"ID":"2c4005ab-2404-4ade-bea7-2859f0bcf6de","Type":"ContainerDied","Data":"47a1877aec150d40167e4b558e458a22f2b70eda64296843086a93bce5ec6d7e"}
Jan 27 20:58:33 crc kubenswrapper[4858]: I0127 20:58:33.775936 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9p6hm" event={"ID":"2c4005ab-2404-4ade-bea7-2859f0bcf6de","Type":"ContainerStarted","Data":"4bd5013aaa41bce388b4cef1bff14f5fd3e88b8054bf992e20220247dd69cf10"}
Jan 27 20:58:34 crc kubenswrapper[4858]: I0127 20:58:34.786588 4858 generic.go:334] "Generic (PLEG): container finished" podID="2c4005ab-2404-4ade-bea7-2859f0bcf6de" containerID="4bd5013aaa41bce388b4cef1bff14f5fd3e88b8054bf992e20220247dd69cf10" exitCode=0
Jan 27 20:58:34 crc kubenswrapper[4858]: I0127 20:58:34.786680 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9p6hm" event={"ID":"2c4005ab-2404-4ade-bea7-2859f0bcf6de","Type":"ContainerDied","Data":"4bd5013aaa41bce388b4cef1bff14f5fd3e88b8054bf992e20220247dd69cf10"}
Jan 27 20:58:35 crc kubenswrapper[4858]: I0127 20:58:35.797757 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9p6hm" event={"ID":"2c4005ab-2404-4ade-bea7-2859f0bcf6de","Type":"ContainerStarted","Data":"5f036dc7e879cfa2037531cda4d09f5f555b083f3acaa07eb071150c1a4465f6"}
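The catalog pod's lifecycle shows up as paired PLEG events: after the sandbox starts, two short-lived extract containers each run and exit 0, then the registry-server container starts and stays up. A minimal fold of those events into per-container state; IDs are truncated for readability and the sequence is hand-copied from the entries above:

```python
# Folding the certified-operators-9p6hm PLEG events into last-known state.
events = [
    ("ContainerStarted", "40f8c4c2bd3c"),  # pod sandbox
    ("ContainerDied",    "47a1877aec15"),  # extract step, exitCode=0 above
    ("ContainerStarted", "4bd5013aaa41"),
    ("ContainerDied",    "4bd5013aaa41"),  # exitCode=0 above
    ("ContainerStarted", "5f036dc7e879"),  # registry-server
]

state = {}
for event_type, container_id in events:
    state[container_id] = "running" if event_type == "ContainerStarted" else "exited"

print(state)  # registry-server (5f03...) is the only app container left running
```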
event={"ID":"2c4005ab-2404-4ade-bea7-2859f0bcf6de","Type":"ContainerStarted","Data":"5f036dc7e879cfa2037531cda4d09f5f555b083f3acaa07eb071150c1a4465f6"} Jan 27 20:58:35 crc kubenswrapper[4858]: I0127 20:58:35.822693 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9p6hm" podStartSLOduration=3.371199982 podStartE2EDuration="5.822669334s" podCreationTimestamp="2026-01-27 20:58:30 +0000 UTC" firstStartedPulling="2026-01-27 20:58:32.760248547 +0000 UTC m=+3057.468064293" lastFinishedPulling="2026-01-27 20:58:35.211717939 +0000 UTC m=+3059.919533645" observedRunningTime="2026-01-27 20:58:35.815091789 +0000 UTC m=+3060.522907495" watchObservedRunningTime="2026-01-27 20:58:35.822669334 +0000 UTC m=+3060.530485040" Jan 27 20:58:37 crc kubenswrapper[4858]: I0127 20:58:37.071774 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 20:58:37 crc kubenswrapper[4858]: E0127 20:58:37.072070 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:58:41 crc kubenswrapper[4858]: I0127 20:58:41.066667 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9p6hm" Jan 27 20:58:41 crc kubenswrapper[4858]: I0127 20:58:41.067206 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9p6hm" Jan 27 20:58:41 crc kubenswrapper[4858]: I0127 20:58:41.142627 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9p6hm" Jan 27 20:58:41 crc kubenswrapper[4858]: I0127 20:58:41.911222 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9p6hm" Jan 27 20:58:41 crc kubenswrapper[4858]: I0127 20:58:41.977900 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9p6hm"] Jan 27 20:58:43 crc kubenswrapper[4858]: I0127 20:58:43.886266 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9p6hm" podUID="2c4005ab-2404-4ade-bea7-2859f0bcf6de" containerName="registry-server" containerID="cri-o://5f036dc7e879cfa2037531cda4d09f5f555b083f3acaa07eb071150c1a4465f6" gracePeriod=2 Jan 27 20:58:44 crc kubenswrapper[4858]: I0127 20:58:44.372155 4858 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 20:58:44 crc kubenswrapper[4858]: I0127 20:58:44.372155 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9p6hm"
Jan 27 20:58:44 crc kubenswrapper[4858]: I0127 20:58:44.489464 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfz52\" (UniqueName: \"kubernetes.io/projected/2c4005ab-2404-4ade-bea7-2859f0bcf6de-kube-api-access-cfz52\") pod \"2c4005ab-2404-4ade-bea7-2859f0bcf6de\" (UID: \"2c4005ab-2404-4ade-bea7-2859f0bcf6de\") "
Jan 27 20:58:44 crc kubenswrapper[4858]: I0127 20:58:44.489738 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4005ab-2404-4ade-bea7-2859f0bcf6de-catalog-content\") pod \"2c4005ab-2404-4ade-bea7-2859f0bcf6de\" (UID: \"2c4005ab-2404-4ade-bea7-2859f0bcf6de\") "
Jan 27 20:58:44 crc kubenswrapper[4858]: I0127 20:58:44.489772 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4005ab-2404-4ade-bea7-2859f0bcf6de-utilities\") pod \"2c4005ab-2404-4ade-bea7-2859f0bcf6de\" (UID: \"2c4005ab-2404-4ade-bea7-2859f0bcf6de\") "
Jan 27 20:58:44 crc kubenswrapper[4858]: I0127 20:58:44.490583 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c4005ab-2404-4ade-bea7-2859f0bcf6de-utilities" (OuterVolumeSpecName: "utilities") pod "2c4005ab-2404-4ade-bea7-2859f0bcf6de" (UID: "2c4005ab-2404-4ade-bea7-2859f0bcf6de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 20:58:44 crc kubenswrapper[4858]: I0127 20:58:44.495754 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c4005ab-2404-4ade-bea7-2859f0bcf6de-kube-api-access-cfz52" (OuterVolumeSpecName: "kube-api-access-cfz52") pod "2c4005ab-2404-4ade-bea7-2859f0bcf6de" (UID: "2c4005ab-2404-4ade-bea7-2859f0bcf6de"). InnerVolumeSpecName "kube-api-access-cfz52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 20:58:44 crc kubenswrapper[4858]: I0127 20:58:44.592160 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4005ab-2404-4ade-bea7-2859f0bcf6de-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 20:58:44 crc kubenswrapper[4858]: I0127 20:58:44.592198 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfz52\" (UniqueName: \"kubernetes.io/projected/2c4005ab-2404-4ade-bea7-2859f0bcf6de-kube-api-access-cfz52\") on node \"crc\" DevicePath \"\""
Jan 27 20:58:44 crc kubenswrapper[4858]: I0127 20:58:44.897722 4858 generic.go:334] "Generic (PLEG): container finished" podID="2c4005ab-2404-4ade-bea7-2859f0bcf6de" containerID="5f036dc7e879cfa2037531cda4d09f5f555b083f3acaa07eb071150c1a4465f6" exitCode=0
Jan 27 20:58:44 crc kubenswrapper[4858]: I0127 20:58:44.897774 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9p6hm" event={"ID":"2c4005ab-2404-4ade-bea7-2859f0bcf6de","Type":"ContainerDied","Data":"5f036dc7e879cfa2037531cda4d09f5f555b083f3acaa07eb071150c1a4465f6"}
Jan 27 20:58:44 crc kubenswrapper[4858]: I0127 20:58:44.897803 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9p6hm" event={"ID":"2c4005ab-2404-4ade-bea7-2859f0bcf6de","Type":"ContainerDied","Data":"40f8c4c2bd3ce71d4b3a5bff738f55b9cc225918aa81c400f11ff03a31b819c8"}
Jan 27 20:58:44 crc kubenswrapper[4858]: I0127 20:58:44.897812 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9p6hm"
Jan 27 20:58:44 crc kubenswrapper[4858]: I0127 20:58:44.897835 4858 scope.go:117] "RemoveContainer" containerID="5f036dc7e879cfa2037531cda4d09f5f555b083f3acaa07eb071150c1a4465f6"
Jan 27 20:58:44 crc kubenswrapper[4858]: I0127 20:58:44.919979 4858 scope.go:117] "RemoveContainer" containerID="4bd5013aaa41bce388b4cef1bff14f5fd3e88b8054bf992e20220247dd69cf10"
Jan 27 20:58:44 crc kubenswrapper[4858]: I0127 20:58:44.941543 4858 scope.go:117] "RemoveContainer" containerID="47a1877aec150d40167e4b558e458a22f2b70eda64296843086a93bce5ec6d7e"
Jan 27 20:58:45 crc kubenswrapper[4858]: I0127 20:58:45.010929 4858 scope.go:117] "RemoveContainer" containerID="5f036dc7e879cfa2037531cda4d09f5f555b083f3acaa07eb071150c1a4465f6"
Jan 27 20:58:45 crc kubenswrapper[4858]: E0127 20:58:45.011352 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f036dc7e879cfa2037531cda4d09f5f555b083f3acaa07eb071150c1a4465f6\": container with ID starting with 5f036dc7e879cfa2037531cda4d09f5f555b083f3acaa07eb071150c1a4465f6 not found: ID does not exist" containerID="5f036dc7e879cfa2037531cda4d09f5f555b083f3acaa07eb071150c1a4465f6"
Jan 27 20:58:45 crc kubenswrapper[4858]: I0127 20:58:45.011389 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f036dc7e879cfa2037531cda4d09f5f555b083f3acaa07eb071150c1a4465f6"} err="failed to get container status \"5f036dc7e879cfa2037531cda4d09f5f555b083f3acaa07eb071150c1a4465f6\": rpc error: code = NotFound desc = could not find container \"5f036dc7e879cfa2037531cda4d09f5f555b083f3acaa07eb071150c1a4465f6\": container with ID starting with 5f036dc7e879cfa2037531cda4d09f5f555b083f3acaa07eb071150c1a4465f6 not found: ID does not exist"
"RemoveContainer" containerID="4bd5013aaa41bce388b4cef1bff14f5fd3e88b8054bf992e20220247dd69cf10" Jan 27 20:58:45 crc kubenswrapper[4858]: E0127 20:58:45.011845 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bd5013aaa41bce388b4cef1bff14f5fd3e88b8054bf992e20220247dd69cf10\": container with ID starting with 4bd5013aaa41bce388b4cef1bff14f5fd3e88b8054bf992e20220247dd69cf10 not found: ID does not exist" containerID="4bd5013aaa41bce388b4cef1bff14f5fd3e88b8054bf992e20220247dd69cf10" Jan 27 20:58:45 crc kubenswrapper[4858]: I0127 20:58:45.011890 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bd5013aaa41bce388b4cef1bff14f5fd3e88b8054bf992e20220247dd69cf10"} err="failed to get container status \"4bd5013aaa41bce388b4cef1bff14f5fd3e88b8054bf992e20220247dd69cf10\": rpc error: code = NotFound desc = could not find container \"4bd5013aaa41bce388b4cef1bff14f5fd3e88b8054bf992e20220247dd69cf10\": container with ID starting with 4bd5013aaa41bce388b4cef1bff14f5fd3e88b8054bf992e20220247dd69cf10 not found: ID does not exist" Jan 27 20:58:45 crc kubenswrapper[4858]: I0127 20:58:45.011909 4858 scope.go:117] "RemoveContainer" containerID="47a1877aec150d40167e4b558e458a22f2b70eda64296843086a93bce5ec6d7e" Jan 27 20:58:45 crc kubenswrapper[4858]: E0127 20:58:45.012272 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47a1877aec150d40167e4b558e458a22f2b70eda64296843086a93bce5ec6d7e\": container with ID starting with 47a1877aec150d40167e4b558e458a22f2b70eda64296843086a93bce5ec6d7e not found: ID does not exist" containerID="47a1877aec150d40167e4b558e458a22f2b70eda64296843086a93bce5ec6d7e" Jan 27 20:58:45 crc kubenswrapper[4858]: I0127 20:58:45.012296 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47a1877aec150d40167e4b558e458a22f2b70eda64296843086a93bce5ec6d7e"} err="failed to get container status \"47a1877aec150d40167e4b558e458a22f2b70eda64296843086a93bce5ec6d7e\": rpc error: code = NotFound desc = could not find container \"47a1877aec150d40167e4b558e458a22f2b70eda64296843086a93bce5ec6d7e\": container with ID starting with 47a1877aec150d40167e4b558e458a22f2b70eda64296843086a93bce5ec6d7e not found: ID does not exist" Jan 27 20:58:45 crc kubenswrapper[4858]: I0127 20:58:45.293440 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c4005ab-2404-4ade-bea7-2859f0bcf6de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c4005ab-2404-4ade-bea7-2859f0bcf6de" (UID: "2c4005ab-2404-4ade-bea7-2859f0bcf6de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:58:45 crc kubenswrapper[4858]: I0127 20:58:45.307198 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4005ab-2404-4ade-bea7-2859f0bcf6de-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 20:58:45 crc kubenswrapper[4858]: I0127 20:58:45.544435 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9p6hm"] Jan 27 20:58:45 crc kubenswrapper[4858]: I0127 20:58:45.557519 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9p6hm"] Jan 27 20:58:46 crc kubenswrapper[4858]: I0127 20:58:46.085294 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c4005ab-2404-4ade-bea7-2859f0bcf6de" path="/var/lib/kubelet/pods/2c4005ab-2404-4ade-bea7-2859f0bcf6de/volumes" Jan 27 20:58:49 crc kubenswrapper[4858]: I0127 20:58:49.070761 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 20:58:49 crc kubenswrapper[4858]: E0127 20:58:49.071045 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:59:01 crc kubenswrapper[4858]: I0127 20:59:01.071224 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 20:59:01 crc kubenswrapper[4858]: E0127 20:59:01.072135 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:59:08 crc kubenswrapper[4858]: I0127 20:59:08.435466 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 20:59:08 crc kubenswrapper[4858]: I0127 20:59:08.436475 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerName="prometheus" containerID="cri-o://c8d2007c95307e3fd8a2616de50cfb7b72c811519dd56bba633427b9a89f46fb" gracePeriod=600 Jan 27 20:59:08 crc kubenswrapper[4858]: I0127 20:59:08.436601 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerName="config-reloader" containerID="cri-o://d8a7562fab9559e8b2a4896ed4b5bf668ee2be4e74b7f27fa4fa650ace0bbf57" gracePeriod=600 Jan 27 20:59:08 crc kubenswrapper[4858]: I0127 20:59:08.436582 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerName="thanos-sidecar" containerID="cri-o://f006c7feb5b087fe7ceeb39b589be407ce3441aae0d68fdf52f54cd807f4a6d6" gracePeriod=600 Jan 27 20:59:09 crc kubenswrapper[4858]: 
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.197601 4858 generic.go:334] "Generic (PLEG): container finished" podID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerID="f006c7feb5b087fe7ceeb39b589be407ce3441aae0d68fdf52f54cd807f4a6d6" exitCode=0
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.197928 4858 generic.go:334] "Generic (PLEG): container finished" podID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerID="d8a7562fab9559e8b2a4896ed4b5bf668ee2be4e74b7f27fa4fa650ace0bbf57" exitCode=0
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.197937 4858 generic.go:334] "Generic (PLEG): container finished" podID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerID="c8d2007c95307e3fd8a2616de50cfb7b72c811519dd56bba633427b9a89f46fb" exitCode=0
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.197689 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0","Type":"ContainerDied","Data":"f006c7feb5b087fe7ceeb39b589be407ce3441aae0d68fdf52f54cd807f4a6d6"}
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.197973 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0","Type":"ContainerDied","Data":"d8a7562fab9559e8b2a4896ed4b5bf668ee2be4e74b7f27fa4fa650ace0bbf57"}
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.197988 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0","Type":"ContainerDied","Data":"c8d2007c95307e3fd8a2616de50cfb7b72c811519dd56bba633427b9a89f46fb"}
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.422183 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.480150 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-secret-combined-ca-bundle\") pod \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") "
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.480207 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-2\") pod \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") "
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.480296 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-config\") pod \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") "
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.480383 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") "
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.480419 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-tls-assets\") pod \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") "
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.480453 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-config-out\") pod \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") "
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.480474 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm582\" (UniqueName: \"kubernetes.io/projected/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-kube-api-access-bm582\") pod \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") "
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.480511 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config\") pod \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") "
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.480620 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") "
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.480693 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-thanos-prometheus-http-client-file\") pod \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") "
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.480928 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") pod \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") "
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.480992 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-1\") pod \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") "
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.481067 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-0\") pod \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\" (UID: \"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0\") "
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.481928 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" (UID: "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.482451 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" (UID: "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.483743 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" (UID: "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
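Each of the thirteen volumes above walks the same three-step teardown: "UnmountVolume started", then "UnmountVolume.TearDown succeeded", then "Volume detached". When a pod hangs in Terminating, the volume whose final step never appears is usually the culprit. A small reconciliation sketch over journal lines (the script is illustrative; volume names are taken from the quoted volume field):

```python
import re, sys

STARTED  = re.compile(r'UnmountVolume started for volume \\?"([^"\\]+)')
DETACHED = re.compile(r'Volume detached for volume \\?"([^"\\]+)')

def stuck_volumes(lines):
    started, detached = set(), set()
    for line in lines:
        if m := STARTED.search(line):
            started.add(m.group(1))
        if m := DETACHED.search(line):
            detached.add(m.group(1))
    return started - detached

if __name__ == "__main__":
    for vol in sorted(stuck_volumes(sys.stdin)):
        print("unmount started but never detached:", vol)
```

One caveat visible right here: the CSI volume starts unmounting under its outer spec name (prometheus-metric-storage-db) but is reported detached under the PV name (pvc-805dfc34-…), so a mismatch on those two names is a naming artifact, not a stuck volume.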
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.489725 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-kube-api-access-bm582" (OuterVolumeSpecName: "kube-api-access-bm582") pod "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" (UID: "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0"). InnerVolumeSpecName "kube-api-access-bm582". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.490755 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-config-out" (OuterVolumeSpecName: "config-out") pod "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" (UID: "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.491331 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" (UID: "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.494170 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" (UID: "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.496611 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" (UID: "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.499765 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" (UID: "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.499862 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-config" (OuterVolumeSpecName: "config") pod "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" (UID: "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.512725 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" (UID: "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0"). InnerVolumeSpecName "pvc-805dfc34-a393-4134-854b-f25365c0a015". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.583887 4858 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.583951 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-805dfc34-a393-4134-854b-f25365c0a015\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") on node \"crc\" " Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.583968 4858 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.583981 4858 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.583992 4858 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.584005 4858 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.584016 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.584028 4858 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.584041 4858 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.584054 4858 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-config-out\") on node \"crc\" DevicePath \"\"" Jan 27 20:59:09 crc 
kubenswrapper[4858]: I0127 20:59:09.584065 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm582\" (UniqueName: \"kubernetes.io/projected/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-kube-api-access-bm582\") on node \"crc\" DevicePath \"\"" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.584075 4858 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.588399 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config" (OuterVolumeSpecName: "web-config") pod "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" (UID: "fba3a657-b6b7-4fb2-87f6-1e1f25626dd0"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.611376 4858 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.611619 4858 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-805dfc34-a393-4134-854b-f25365c0a015" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015") on node "crc" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.686460 4858 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0-web-config\") on node \"crc\" DevicePath \"\"" Jan 27 20:59:09 crc kubenswrapper[4858]: I0127 20:59:09.686510 4858 reconciler_common.go:293] "Volume detached for volume \"pvc-805dfc34-a393-4134-854b-f25365c0a015\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") on node \"crc\" DevicePath \"\"" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.209766 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fba3a657-b6b7-4fb2-87f6-1e1f25626dd0","Type":"ContainerDied","Data":"bcd9bc9c355a98976b006fa1e705745b4be5427ffd919fc52118a33cb0bf0f65"} Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.209891 4858 util.go:48] "No ready sandbox for pod can be found. 
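The "attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice" line is the kubelet consulting the CSI driver's node capabilities: kubevirt.io.hostpath-provisioner evidently does not advertise STAGE_UNSTAGE_VOLUME, so there is no NodeUnstageVolume step and the device-level (global mount) unmount is skipped, leaving only the per-pod unpublish. The gate reduces to a capability check, roughly like this paraphrase (not kubelet source; node_unstage_volume is a hypothetical RPC wrapper):

```python
def unmount_device(driver_caps: set, volume: str) -> None:
    # CSI drivers opt in to NodeStage/NodeUnstage via STAGE_UNSTAGE_VOLUME;
    # without it there is no global-mount device step to undo.
    if "STAGE_UNSTAGE_VOLUME" not in driver_caps:
        print(f"skipping UnmountDevice for {volume}: capability not set")
        return
    node_unstage_volume(volume)  # hypothetical RPC wrapper, never reached here

unmount_device(set(), "pvc-805dfc34-a393-4134-854b-f25365c0a015")
```

Note that "UnmountDevice succeeded" is still logged right after the skip; the bookkeeping completes even though no unstage RPC was issued.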
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.209891 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.210123 4858 scope.go:117] "RemoveContainer" containerID="f006c7feb5b087fe7ceeb39b589be407ce3441aae0d68fdf52f54cd807f4a6d6"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.250443 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.256168 4858 scope.go:117] "RemoveContainer" containerID="d8a7562fab9559e8b2a4896ed4b5bf668ee2be4e74b7f27fa4fa650ace0bbf57"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.287590 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.299216 4858 scope.go:117] "RemoveContainer" containerID="c8d2007c95307e3fd8a2616de50cfb7b72c811519dd56bba633427b9a89f46fb"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.321525 4858 scope.go:117] "RemoveContainer" containerID="c78c0542eef88e01616f455080f4df87d1dfe0c83fe75318e26106a71f1b34cf"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.345090 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 27 20:59:10 crc kubenswrapper[4858]: E0127 20:59:10.345804 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4005ab-2404-4ade-bea7-2859f0bcf6de" containerName="registry-server"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.345832 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4005ab-2404-4ade-bea7-2859f0bcf6de" containerName="registry-server"
Jan 27 20:59:10 crc kubenswrapper[4858]: E0127 20:59:10.345864 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerName="prometheus"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.345871 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerName="prometheus"
Jan 27 20:59:10 crc kubenswrapper[4858]: E0127 20:59:10.345885 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerName="thanos-sidecar"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.345892 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerName="thanos-sidecar"
Jan 27 20:59:10 crc kubenswrapper[4858]: E0127 20:59:10.345912 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4005ab-2404-4ade-bea7-2859f0bcf6de" containerName="extract-utilities"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.345918 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4005ab-2404-4ade-bea7-2859f0bcf6de" containerName="extract-utilities"
Jan 27 20:59:10 crc kubenswrapper[4858]: E0127 20:59:10.345929 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerName="config-reloader"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.345934 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerName="config-reloader"
Jan 27 20:59:10 crc kubenswrapper[4858]: E0127 20:59:10.345945 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerName="init-config-reloader"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.345951 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerName="init-config-reloader"
Jan 27 20:59:10 crc kubenswrapper[4858]: E0127 20:59:10.345961 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4005ab-2404-4ade-bea7-2859f0bcf6de" containerName="extract-content"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.345966 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4005ab-2404-4ade-bea7-2859f0bcf6de" containerName="extract-content"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.346188 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerName="thanos-sidecar"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.346211 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c4005ab-2404-4ade-bea7-2859f0bcf6de" containerName="registry-server"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.346227 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerName="config-reloader"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.346237 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" containerName="prometheus"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.348192 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.352386 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.352580 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.354618 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.354946 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.355151 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.356153 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.360576 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.361959 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-pdrbd"
Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.371296 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
\"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.401867 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2g6t\" (UniqueName: \"kubernetes.io/projected/0511fb5d-042b-4155-88f8-3711949342c5-kube-api-access-b2g6t\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.401912 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0511fb5d-042b-4155-88f8-3711949342c5-config\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.402356 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0511fb5d-042b-4155-88f8-3711949342c5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.402408 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0511fb5d-042b-4155-88f8-3711949342c5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.402560 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0511fb5d-042b-4155-88f8-3711949342c5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.402647 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-805dfc34-a393-4134-854b-f25365c0a015\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.402680 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0511fb5d-042b-4155-88f8-3711949342c5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.402730 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0511fb5d-042b-4155-88f8-3711949342c5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.402794 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0511fb5d-042b-4155-88f8-3711949342c5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.402867 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0511fb5d-042b-4155-88f8-3711949342c5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.402908 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0511fb5d-042b-4155-88f8-3711949342c5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.403047 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0511fb5d-042b-4155-88f8-3711949342c5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.505915 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0511fb5d-042b-4155-88f8-3711949342c5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.506006 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0511fb5d-042b-4155-88f8-3711949342c5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.506098 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0511fb5d-042b-4155-88f8-3711949342c5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.506202 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0511fb5d-042b-4155-88f8-3711949342c5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " 
pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.506247 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2g6t\" (UniqueName: \"kubernetes.io/projected/0511fb5d-042b-4155-88f8-3711949342c5-kube-api-access-b2g6t\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.506292 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0511fb5d-042b-4155-88f8-3711949342c5-config\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.506396 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0511fb5d-042b-4155-88f8-3711949342c5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.506452 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0511fb5d-042b-4155-88f8-3711949342c5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.506623 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0511fb5d-042b-4155-88f8-3711949342c5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.506705 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-805dfc34-a393-4134-854b-f25365c0a015\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.506754 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0511fb5d-042b-4155-88f8-3711949342c5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.506799 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0511fb5d-042b-4155-88f8-3711949342c5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.506861 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0511fb5d-042b-4155-88f8-3711949342c5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: 
\"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.507430 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0511fb5d-042b-4155-88f8-3711949342c5-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.507447 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0511fb5d-042b-4155-88f8-3711949342c5-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.508755 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0511fb5d-042b-4155-88f8-3711949342c5-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.511093 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.511124 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-805dfc34-a393-4134-854b-f25365c0a015\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/803e592a1a81ac6dfdcc5cad0d3e656e83a32ab2cebbc52f70d41fc2b9c7180d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.511692 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0511fb5d-042b-4155-88f8-3711949342c5-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.512285 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0511fb5d-042b-4155-88f8-3711949342c5-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.513578 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0511fb5d-042b-4155-88f8-3711949342c5-config\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.514427 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/0511fb5d-042b-4155-88f8-3711949342c5-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.514850 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0511fb5d-042b-4155-88f8-3711949342c5-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.516093 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0511fb5d-042b-4155-88f8-3711949342c5-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.517591 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0511fb5d-042b-4155-88f8-3711949342c5-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.524096 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0511fb5d-042b-4155-88f8-3711949342c5-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.527264 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2g6t\" (UniqueName: \"kubernetes.io/projected/0511fb5d-042b-4155-88f8-3711949342c5-kube-api-access-b2g6t\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.559292 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-805dfc34-a393-4134-854b-f25365c0a015\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-805dfc34-a393-4134-854b-f25365c0a015\") pod \"prometheus-metric-storage-0\" (UID: \"0511fb5d-042b-4155-88f8-3711949342c5\") " pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:10 crc kubenswrapper[4858]: I0127 20:59:10.702237 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:11 crc kubenswrapper[4858]: I0127 20:59:11.206108 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 20:59:11 crc kubenswrapper[4858]: I0127 20:59:11.221028 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0511fb5d-042b-4155-88f8-3711949342c5","Type":"ContainerStarted","Data":"ed5bbfe435a5c33582f21a40487f664e07f1b316eccd6e2aded72c7f69e4c402"} Jan 27 20:59:12 crc kubenswrapper[4858]: I0127 20:59:12.086522 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fba3a657-b6b7-4fb2-87f6-1e1f25626dd0" path="/var/lib/kubelet/pods/fba3a657-b6b7-4fb2-87f6-1e1f25626dd0/volumes" Jan 27 20:59:14 crc kubenswrapper[4858]: I0127 20:59:14.070894 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 20:59:14 crc kubenswrapper[4858]: E0127 20:59:14.071588 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:59:15 crc kubenswrapper[4858]: I0127 20:59:15.261346 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0511fb5d-042b-4155-88f8-3711949342c5","Type":"ContainerStarted","Data":"29cdd94c12a5f4152ae10763bc2931c2f89947bb768e4c5a6eff7e5ae26ca46a"} Jan 27 20:59:23 crc kubenswrapper[4858]: I0127 20:59:23.343962 4858 generic.go:334] "Generic (PLEG): container finished" podID="0511fb5d-042b-4155-88f8-3711949342c5" containerID="29cdd94c12a5f4152ae10763bc2931c2f89947bb768e4c5a6eff7e5ae26ca46a" exitCode=0 Jan 27 20:59:23 crc kubenswrapper[4858]: I0127 20:59:23.344063 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0511fb5d-042b-4155-88f8-3711949342c5","Type":"ContainerDied","Data":"29cdd94c12a5f4152ae10763bc2931c2f89947bb768e4c5a6eff7e5ae26ca46a"} Jan 27 20:59:24 crc kubenswrapper[4858]: I0127 20:59:24.356076 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0511fb5d-042b-4155-88f8-3711949342c5","Type":"ContainerStarted","Data":"e1978dcf805564f0870b51c58ef4b23f5133305b10f083451afc6d652c588f58"} Jan 27 20:59:28 crc kubenswrapper[4858]: I0127 20:59:28.078608 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 20:59:28 crc kubenswrapper[4858]: E0127 20:59:28.079642 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:59:28 crc kubenswrapper[4858]: I0127 20:59:28.400965 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"0511fb5d-042b-4155-88f8-3711949342c5","Type":"ContainerStarted","Data":"30cd9272bef86e277d89858e7207ca5a11d1721c97e94f2483718b589a98b48a"} Jan 27 20:59:28 crc kubenswrapper[4858]: I0127 20:59:28.401022 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0511fb5d-042b-4155-88f8-3711949342c5","Type":"ContainerStarted","Data":"77f2290e01bb9334191753dbd8ebfe4b2ecc7cf0ab9a10a89dac92dcaafdba84"} Jan 27 20:59:28 crc kubenswrapper[4858]: I0127 20:59:28.439184 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.43916082 podStartE2EDuration="18.43916082s" podCreationTimestamp="2026-01-27 20:59:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 20:59:28.429278669 +0000 UTC m=+3113.137094395" watchObservedRunningTime="2026-01-27 20:59:28.43916082 +0000 UTC m=+3113.146976526" Jan 27 20:59:30 crc kubenswrapper[4858]: I0127 20:59:30.703091 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:40 crc kubenswrapper[4858]: I0127 20:59:40.703470 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:40 crc kubenswrapper[4858]: I0127 20:59:40.710336 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:41 crc kubenswrapper[4858]: I0127 20:59:41.530320 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 27 20:59:43 crc kubenswrapper[4858]: I0127 20:59:43.071214 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 20:59:43 crc kubenswrapper[4858]: E0127 20:59:43.071573 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 20:59:48 crc kubenswrapper[4858]: I0127 20:59:48.856155 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 20:59:48 crc kubenswrapper[4858]: I0127 20:59:48.858077 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 20:59:48 crc kubenswrapper[4858]: I0127 20:59:48.860967 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 27 20:59:48 crc kubenswrapper[4858]: I0127 20:59:48.861408 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 27 20:59:48 crc kubenswrapper[4858]: I0127 20:59:48.861425 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 20:59:48 crc kubenswrapper[4858]: I0127 20:59:48.862124 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-g529f" Jan 27 20:59:48 crc kubenswrapper[4858]: I0127 20:59:48.866578 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 20:59:48 crc kubenswrapper[4858]: I0127 20:59:48.977270 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:48 crc kubenswrapper[4858]: I0127 20:59:48.977344 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0671e111-61e9-439b-9457-c29b7d18a1f7-config-data\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:48 crc kubenswrapper[4858]: I0127 20:59:48.977382 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0671e111-61e9-439b-9457-c29b7d18a1f7-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:48 crc kubenswrapper[4858]: I0127 20:59:48.977430 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9mw\" (UniqueName: \"kubernetes.io/projected/0671e111-61e9-439b-9457-c29b7d18a1f7-kube-api-access-5x9mw\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:48 crc kubenswrapper[4858]: I0127 20:59:48.977541 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0671e111-61e9-439b-9457-c29b7d18a1f7-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:48 crc kubenswrapper[4858]: I0127 20:59:48.977588 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:48 crc kubenswrapper[4858]: I0127 20:59:48.977698 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:48 crc kubenswrapper[4858]: I0127 20:59:48.977730 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0671e111-61e9-439b-9457-c29b7d18a1f7-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:48 crc kubenswrapper[4858]: I0127 20:59:48.977777 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.079415 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0671e111-61e9-439b-9457-c29b7d18a1f7-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.079472 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.079511 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.079542 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0671e111-61e9-439b-9457-c29b7d18a1f7-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.079609 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.079640 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.079691 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0671e111-61e9-439b-9457-c29b7d18a1f7-config-data\") pod \"tempest-tests-tempest\" (UID: 
\"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.079725 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0671e111-61e9-439b-9457-c29b7d18a1f7-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.079773 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9mw\" (UniqueName: \"kubernetes.io/projected/0671e111-61e9-439b-9457-c29b7d18a1f7-kube-api-access-5x9mw\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.080467 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.080700 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0671e111-61e9-439b-9457-c29b7d18a1f7-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.081406 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0671e111-61e9-439b-9457-c29b7d18a1f7-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.081701 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0671e111-61e9-439b-9457-c29b7d18a1f7-config-data\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.081799 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0671e111-61e9-439b-9457-c29b7d18a1f7-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.087689 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.088135 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 
20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.088471 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.104445 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x9mw\" (UniqueName: \"kubernetes.io/projected/0671e111-61e9-439b-9457-c29b7d18a1f7-kube-api-access-5x9mw\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.113420 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.180872 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 20:59:49 crc kubenswrapper[4858]: I0127 20:59:49.666201 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 20:59:50 crc kubenswrapper[4858]: I0127 20:59:50.604606 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0671e111-61e9-439b-9457-c29b7d18a1f7","Type":"ContainerStarted","Data":"e6f8cce5105724c06ad3a727828346d0c0a742e58a1e428fafdd052e8d207f80"} Jan 27 20:59:57 crc kubenswrapper[4858]: I0127 20:59:57.071600 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 20:59:57 crc kubenswrapper[4858]: E0127 20:59:57.072201 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:00:00 crc kubenswrapper[4858]: I0127 21:00:00.164405 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7"] Jan 27 21:00:00 crc kubenswrapper[4858]: I0127 21:00:00.167614 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" Jan 27 21:00:00 crc kubenswrapper[4858]: I0127 21:00:00.173033 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 21:00:00 crc kubenswrapper[4858]: I0127 21:00:00.174020 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 21:00:00 crc kubenswrapper[4858]: I0127 21:00:00.221415 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7"] Jan 27 21:00:00 crc kubenswrapper[4858]: I0127 21:00:00.328840 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3ac7699-9e31-4b84-99ce-403308136463-secret-volume\") pod \"collect-profiles-29492460-glpv7\" (UID: \"c3ac7699-9e31-4b84-99ce-403308136463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" Jan 27 21:00:00 crc kubenswrapper[4858]: I0127 21:00:00.329092 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zg7j\" (UniqueName: \"kubernetes.io/projected/c3ac7699-9e31-4b84-99ce-403308136463-kube-api-access-8zg7j\") pod \"collect-profiles-29492460-glpv7\" (UID: \"c3ac7699-9e31-4b84-99ce-403308136463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" Jan 27 21:00:00 crc kubenswrapper[4858]: I0127 21:00:00.329140 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3ac7699-9e31-4b84-99ce-403308136463-config-volume\") pod \"collect-profiles-29492460-glpv7\" (UID: \"c3ac7699-9e31-4b84-99ce-403308136463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" Jan 27 21:00:00 crc kubenswrapper[4858]: I0127 21:00:00.433802 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zg7j\" (UniqueName: \"kubernetes.io/projected/c3ac7699-9e31-4b84-99ce-403308136463-kube-api-access-8zg7j\") pod \"collect-profiles-29492460-glpv7\" (UID: \"c3ac7699-9e31-4b84-99ce-403308136463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" Jan 27 21:00:00 crc kubenswrapper[4858]: I0127 21:00:00.433929 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3ac7699-9e31-4b84-99ce-403308136463-config-volume\") pod \"collect-profiles-29492460-glpv7\" (UID: \"c3ac7699-9e31-4b84-99ce-403308136463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" Jan 27 21:00:00 crc kubenswrapper[4858]: I0127 21:00:00.434111 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3ac7699-9e31-4b84-99ce-403308136463-secret-volume\") pod \"collect-profiles-29492460-glpv7\" (UID: \"c3ac7699-9e31-4b84-99ce-403308136463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" Jan 27 21:00:00 crc kubenswrapper[4858]: I0127 21:00:00.435812 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3ac7699-9e31-4b84-99ce-403308136463-config-volume\") pod 
\"collect-profiles-29492460-glpv7\" (UID: \"c3ac7699-9e31-4b84-99ce-403308136463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" Jan 27 21:00:00 crc kubenswrapper[4858]: I0127 21:00:00.446489 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3ac7699-9e31-4b84-99ce-403308136463-secret-volume\") pod \"collect-profiles-29492460-glpv7\" (UID: \"c3ac7699-9e31-4b84-99ce-403308136463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" Jan 27 21:00:00 crc kubenswrapper[4858]: I0127 21:00:00.458255 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zg7j\" (UniqueName: \"kubernetes.io/projected/c3ac7699-9e31-4b84-99ce-403308136463-kube-api-access-8zg7j\") pod \"collect-profiles-29492460-glpv7\" (UID: \"c3ac7699-9e31-4b84-99ce-403308136463\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" Jan 27 21:00:00 crc kubenswrapper[4858]: I0127 21:00:00.506424 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" Jan 27 21:00:01 crc kubenswrapper[4858]: I0127 21:00:01.048887 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7"] Jan 27 21:00:01 crc kubenswrapper[4858]: I0127 21:00:01.732444 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" event={"ID":"c3ac7699-9e31-4b84-99ce-403308136463","Type":"ContainerStarted","Data":"c14238c879cf14bd5a656c764b5626e64fa0679703813653d5b3cefcff125bfc"} Jan 27 21:00:01 crc kubenswrapper[4858]: I0127 21:00:01.732955 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" event={"ID":"c3ac7699-9e31-4b84-99ce-403308136463","Type":"ContainerStarted","Data":"ed50a00fd14cf97083e0a6710dd8d8040fee123b12bfd9db1cbe528c63b5aa04"} Jan 27 21:00:01 crc kubenswrapper[4858]: I0127 21:00:01.757642 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" podStartSLOduration=1.757615833 podStartE2EDuration="1.757615833s" podCreationTimestamp="2026-01-27 21:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:00:01.748035961 +0000 UTC m=+3146.455851687" watchObservedRunningTime="2026-01-27 21:00:01.757615833 +0000 UTC m=+3146.465431559" Jan 27 21:00:02 crc kubenswrapper[4858]: I0127 21:00:02.745085 4858 generic.go:334] "Generic (PLEG): container finished" podID="c3ac7699-9e31-4b84-99ce-403308136463" containerID="c14238c879cf14bd5a656c764b5626e64fa0679703813653d5b3cefcff125bfc" exitCode=0 Jan 27 21:00:02 crc kubenswrapper[4858]: I0127 21:00:02.745140 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" event={"ID":"c3ac7699-9e31-4b84-99ce-403308136463","Type":"ContainerDied","Data":"c14238c879cf14bd5a656c764b5626e64fa0679703813653d5b3cefcff125bfc"} Jan 27 21:00:02 crc kubenswrapper[4858]: I0127 21:00:02.747771 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"0671e111-61e9-439b-9457-c29b7d18a1f7","Type":"ContainerStarted","Data":"35ed16ec0d2db2d7ec11649d4c05327e45956530b22b23107153667079075a32"} Jan 27 21:00:02 crc kubenswrapper[4858]: I0127 21:00:02.800777 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.022475978 podStartE2EDuration="15.80074961s" podCreationTimestamp="2026-01-27 20:59:47 +0000 UTC" firstStartedPulling="2026-01-27 20:59:49.676701718 +0000 UTC m=+3134.384517454" lastFinishedPulling="2026-01-27 21:00:00.45497538 +0000 UTC m=+3145.162791086" observedRunningTime="2026-01-27 21:00:02.791198869 +0000 UTC m=+3147.499014595" watchObservedRunningTime="2026-01-27 21:00:02.80074961 +0000 UTC m=+3147.508565316" Jan 27 21:00:04 crc kubenswrapper[4858]: I0127 21:00:04.195527 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" Jan 27 21:00:04 crc kubenswrapper[4858]: I0127 21:00:04.332769 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zg7j\" (UniqueName: \"kubernetes.io/projected/c3ac7699-9e31-4b84-99ce-403308136463-kube-api-access-8zg7j\") pod \"c3ac7699-9e31-4b84-99ce-403308136463\" (UID: \"c3ac7699-9e31-4b84-99ce-403308136463\") " Jan 27 21:00:04 crc kubenswrapper[4858]: I0127 21:00:04.333022 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3ac7699-9e31-4b84-99ce-403308136463-config-volume\") pod \"c3ac7699-9e31-4b84-99ce-403308136463\" (UID: \"c3ac7699-9e31-4b84-99ce-403308136463\") " Jan 27 21:00:04 crc kubenswrapper[4858]: I0127 21:00:04.333073 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3ac7699-9e31-4b84-99ce-403308136463-secret-volume\") pod \"c3ac7699-9e31-4b84-99ce-403308136463\" (UID: \"c3ac7699-9e31-4b84-99ce-403308136463\") " Jan 27 21:00:04 crc kubenswrapper[4858]: I0127 21:00:04.333687 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3ac7699-9e31-4b84-99ce-403308136463-config-volume" (OuterVolumeSpecName: "config-volume") pod "c3ac7699-9e31-4b84-99ce-403308136463" (UID: "c3ac7699-9e31-4b84-99ce-403308136463"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:00:04 crc kubenswrapper[4858]: I0127 21:00:04.340775 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3ac7699-9e31-4b84-99ce-403308136463-kube-api-access-8zg7j" (OuterVolumeSpecName: "kube-api-access-8zg7j") pod "c3ac7699-9e31-4b84-99ce-403308136463" (UID: "c3ac7699-9e31-4b84-99ce-403308136463"). InnerVolumeSpecName "kube-api-access-8zg7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:00:04 crc kubenswrapper[4858]: I0127 21:00:04.345646 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ac7699-9e31-4b84-99ce-403308136463-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c3ac7699-9e31-4b84-99ce-403308136463" (UID: "c3ac7699-9e31-4b84-99ce-403308136463"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:00:04 crc kubenswrapper[4858]: I0127 21:00:04.436136 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3ac7699-9e31-4b84-99ce-403308136463-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 21:00:04 crc kubenswrapper[4858]: I0127 21:00:04.436189 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3ac7699-9e31-4b84-99ce-403308136463-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 21:00:04 crc kubenswrapper[4858]: I0127 21:00:04.436203 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zg7j\" (UniqueName: \"kubernetes.io/projected/c3ac7699-9e31-4b84-99ce-403308136463-kube-api-access-8zg7j\") on node \"crc\" DevicePath \"\"" Jan 27 21:00:04 crc kubenswrapper[4858]: I0127 21:00:04.783767 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" event={"ID":"c3ac7699-9e31-4b84-99ce-403308136463","Type":"ContainerDied","Data":"ed50a00fd14cf97083e0a6710dd8d8040fee123b12bfd9db1cbe528c63b5aa04"} Jan 27 21:00:04 crc kubenswrapper[4858]: I0127 21:00:04.783850 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed50a00fd14cf97083e0a6710dd8d8040fee123b12bfd9db1cbe528c63b5aa04" Jan 27 21:00:04 crc kubenswrapper[4858]: I0127 21:00:04.784014 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7" Jan 27 21:00:04 crc kubenswrapper[4858]: I0127 21:00:04.873412 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj"] Jan 27 21:00:04 crc kubenswrapper[4858]: I0127 21:00:04.887256 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492415-rjcqj"] Jan 27 21:00:06 crc kubenswrapper[4858]: I0127 21:00:06.087157 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="120a892c-adc8-488d-91e7-3c76b47af2fb" path="/var/lib/kubelet/pods/120a892c-adc8-488d-91e7-3c76b47af2fb/volumes" Jan 27 21:00:10 crc kubenswrapper[4858]: I0127 21:00:10.071521 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 21:00:10 crc kubenswrapper[4858]: E0127 21:00:10.073776 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:00:22 crc kubenswrapper[4858]: I0127 21:00:22.072047 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 21:00:22 crc kubenswrapper[4858]: E0127 21:00:22.072811 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:00:36 crc kubenswrapper[4858]: I0127 21:00:36.079962 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 21:00:36 crc kubenswrapper[4858]: E0127 21:00:36.082804 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:00:48 crc kubenswrapper[4858]: I0127 21:00:48.071119 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 21:00:48 crc kubenswrapper[4858]: E0127 21:00:48.072215 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.186396 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29492461-vqlt7"] Jan 27 21:01:00 crc kubenswrapper[4858]: E0127 21:01:00.187452 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ac7699-9e31-4b84-99ce-403308136463" containerName="collect-profiles" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.187466 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ac7699-9e31-4b84-99ce-403308136463" containerName="collect-profiles" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.187683 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3ac7699-9e31-4b84-99ce-403308136463" containerName="collect-profiles" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.188992 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492461-vqlt7" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.198044 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492461-vqlt7"] Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.312226 4858 scope.go:117] "RemoveContainer" containerID="ffcacacbea21b164f70d9b3388c5d22f4b4efaef049aff948c3b43dccbf312e7" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.367817 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-config-data\") pod \"keystone-cron-29492461-vqlt7\" (UID: \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\") " pod="openstack/keystone-cron-29492461-vqlt7" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.367921 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-fernet-keys\") pod \"keystone-cron-29492461-vqlt7\" (UID: \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\") " pod="openstack/keystone-cron-29492461-vqlt7" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.367978 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-combined-ca-bundle\") pod \"keystone-cron-29492461-vqlt7\" (UID: \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\") " pod="openstack/keystone-cron-29492461-vqlt7" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.368007 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95jx7\" (UniqueName: \"kubernetes.io/projected/5b18bb1a-5b75-4c25-b553-12b03b2492a0-kube-api-access-95jx7\") pod \"keystone-cron-29492461-vqlt7\" (UID: \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\") " pod="openstack/keystone-cron-29492461-vqlt7" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.470297 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-config-data\") pod \"keystone-cron-29492461-vqlt7\" (UID: \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\") " pod="openstack/keystone-cron-29492461-vqlt7" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.470703 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-fernet-keys\") pod \"keystone-cron-29492461-vqlt7\" (UID: \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\") " pod="openstack/keystone-cron-29492461-vqlt7" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.470753 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-combined-ca-bundle\") pod \"keystone-cron-29492461-vqlt7\" (UID: \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\") " pod="openstack/keystone-cron-29492461-vqlt7" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.470774 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95jx7\" (UniqueName: \"kubernetes.io/projected/5b18bb1a-5b75-4c25-b553-12b03b2492a0-kube-api-access-95jx7\") pod \"keystone-cron-29492461-vqlt7\" (UID: 
\"5b18bb1a-5b75-4c25-b553-12b03b2492a0\") " pod="openstack/keystone-cron-29492461-vqlt7" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.476829 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-combined-ca-bundle\") pod \"keystone-cron-29492461-vqlt7\" (UID: \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\") " pod="openstack/keystone-cron-29492461-vqlt7" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.476867 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-config-data\") pod \"keystone-cron-29492461-vqlt7\" (UID: \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\") " pod="openstack/keystone-cron-29492461-vqlt7" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.478481 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-fernet-keys\") pod \"keystone-cron-29492461-vqlt7\" (UID: \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\") " pod="openstack/keystone-cron-29492461-vqlt7" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.486832 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95jx7\" (UniqueName: \"kubernetes.io/projected/5b18bb1a-5b75-4c25-b553-12b03b2492a0-kube-api-access-95jx7\") pod \"keystone-cron-29492461-vqlt7\" (UID: \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\") " pod="openstack/keystone-cron-29492461-vqlt7" Jan 27 21:01:00 crc kubenswrapper[4858]: I0127 21:01:00.536133 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492461-vqlt7" Jan 27 21:01:01 crc kubenswrapper[4858]: I0127 21:01:01.056195 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492461-vqlt7"] Jan 27 21:01:01 crc kubenswrapper[4858]: I0127 21:01:01.073358 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 21:01:01 crc kubenswrapper[4858]: E0127 21:01:01.073891 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:01:01 crc kubenswrapper[4858]: I0127 21:01:01.339774 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492461-vqlt7" event={"ID":"5b18bb1a-5b75-4c25-b553-12b03b2492a0","Type":"ContainerStarted","Data":"63854d230523de0df7f8f9a9366e8cb9de6dc569d859017e8ecf3822978b8bf5"} Jan 27 21:01:01 crc kubenswrapper[4858]: I0127 21:01:01.340082 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492461-vqlt7" event={"ID":"5b18bb1a-5b75-4c25-b553-12b03b2492a0","Type":"ContainerStarted","Data":"31489bb0a696a874e071b911809be0ef297a7eeeb97d59caa313e30993e480d2"} Jan 27 21:01:01 crc kubenswrapper[4858]: I0127 21:01:01.358882 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29492461-vqlt7" podStartSLOduration=1.358857577 podStartE2EDuration="1.358857577s" podCreationTimestamp="2026-01-27 21:01:00 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:01:01.356252453 +0000 UTC m=+3206.064068179" watchObservedRunningTime="2026-01-27 21:01:01.358857577 +0000 UTC m=+3206.066673283" Jan 27 21:01:05 crc kubenswrapper[4858]: I0127 21:01:05.382937 4858 generic.go:334] "Generic (PLEG): container finished" podID="5b18bb1a-5b75-4c25-b553-12b03b2492a0" containerID="63854d230523de0df7f8f9a9366e8cb9de6dc569d859017e8ecf3822978b8bf5" exitCode=0 Jan 27 21:01:05 crc kubenswrapper[4858]: I0127 21:01:05.383016 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492461-vqlt7" event={"ID":"5b18bb1a-5b75-4c25-b553-12b03b2492a0","Type":"ContainerDied","Data":"63854d230523de0df7f8f9a9366e8cb9de6dc569d859017e8ecf3822978b8bf5"} Jan 27 21:01:06 crc kubenswrapper[4858]: I0127 21:01:06.816319 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492461-vqlt7" Jan 27 21:01:06 crc kubenswrapper[4858]: I0127 21:01:06.910468 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-combined-ca-bundle\") pod \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\" (UID: \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\") " Jan 27 21:01:06 crc kubenswrapper[4858]: I0127 21:01:06.910613 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95jx7\" (UniqueName: \"kubernetes.io/projected/5b18bb1a-5b75-4c25-b553-12b03b2492a0-kube-api-access-95jx7\") pod \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\" (UID: \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\") " Jan 27 21:01:06 crc kubenswrapper[4858]: I0127 21:01:06.910658 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-config-data\") pod \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\" (UID: \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\") " Jan 27 21:01:06 crc kubenswrapper[4858]: I0127 21:01:06.910733 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-fernet-keys\") pod \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\" (UID: \"5b18bb1a-5b75-4c25-b553-12b03b2492a0\") " Jan 27 21:01:06 crc kubenswrapper[4858]: I0127 21:01:06.916644 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5b18bb1a-5b75-4c25-b553-12b03b2492a0" (UID: "5b18bb1a-5b75-4c25-b553-12b03b2492a0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:01:06 crc kubenswrapper[4858]: I0127 21:01:06.927749 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b18bb1a-5b75-4c25-b553-12b03b2492a0-kube-api-access-95jx7" (OuterVolumeSpecName: "kube-api-access-95jx7") pod "5b18bb1a-5b75-4c25-b553-12b03b2492a0" (UID: "5b18bb1a-5b75-4c25-b553-12b03b2492a0"). InnerVolumeSpecName "kube-api-access-95jx7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:01:06 crc kubenswrapper[4858]: I0127 21:01:06.944845 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b18bb1a-5b75-4c25-b553-12b03b2492a0" (UID: "5b18bb1a-5b75-4c25-b553-12b03b2492a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:01:06 crc kubenswrapper[4858]: I0127 21:01:06.977672 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-config-data" (OuterVolumeSpecName: "config-data") pod "5b18bb1a-5b75-4c25-b553-12b03b2492a0" (UID: "5b18bb1a-5b75-4c25-b553-12b03b2492a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:01:07 crc kubenswrapper[4858]: I0127 21:01:07.013650 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 21:01:07 crc kubenswrapper[4858]: I0127 21:01:07.013692 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95jx7\" (UniqueName: \"kubernetes.io/projected/5b18bb1a-5b75-4c25-b553-12b03b2492a0-kube-api-access-95jx7\") on node \"crc\" DevicePath \"\"" Jan 27 21:01:07 crc kubenswrapper[4858]: I0127 21:01:07.013708 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 21:01:07 crc kubenswrapper[4858]: I0127 21:01:07.013719 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5b18bb1a-5b75-4c25-b553-12b03b2492a0-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 21:01:07 crc kubenswrapper[4858]: I0127 21:01:07.402964 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492461-vqlt7" event={"ID":"5b18bb1a-5b75-4c25-b553-12b03b2492a0","Type":"ContainerDied","Data":"31489bb0a696a874e071b911809be0ef297a7eeeb97d59caa313e30993e480d2"} Jan 27 21:01:07 crc kubenswrapper[4858]: I0127 21:01:07.403021 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31489bb0a696a874e071b911809be0ef297a7eeeb97d59caa313e30993e480d2" Jan 27 21:01:07 crc kubenswrapper[4858]: I0127 21:01:07.403035 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492461-vqlt7" Jan 27 21:01:12 crc kubenswrapper[4858]: I0127 21:01:12.071290 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 21:01:12 crc kubenswrapper[4858]: E0127 21:01:12.073070 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:01:23 crc kubenswrapper[4858]: I0127 21:01:23.071711 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 21:01:23 crc kubenswrapper[4858]: E0127 21:01:23.072635 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:01:34 crc kubenswrapper[4858]: I0127 21:01:34.070725 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 21:01:34 crc kubenswrapper[4858]: I0127 21:01:34.731466 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"0221581f682274ce510396feeb422f7fc8b447cf51bc13911aa2ff45fb1be6dd"} Jan 27 21:01:56 crc kubenswrapper[4858]: I0127 21:01:56.151737 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ttmn8"] Jan 27 21:01:56 crc kubenswrapper[4858]: E0127 21:01:56.153284 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b18bb1a-5b75-4c25-b553-12b03b2492a0" containerName="keystone-cron" Jan 27 21:01:56 crc kubenswrapper[4858]: I0127 21:01:56.153307 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b18bb1a-5b75-4c25-b553-12b03b2492a0" containerName="keystone-cron" Jan 27 21:01:56 crc kubenswrapper[4858]: I0127 21:01:56.153721 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b18bb1a-5b75-4c25-b553-12b03b2492a0" containerName="keystone-cron" Jan 27 21:01:56 crc kubenswrapper[4858]: I0127 21:01:56.156235 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ttmn8" Jan 27 21:01:56 crc kubenswrapper[4858]: I0127 21:01:56.198661 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ttmn8"] Jan 27 21:01:56 crc kubenswrapper[4858]: I0127 21:01:56.268367 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-942mk\" (UniqueName: \"kubernetes.io/projected/572611cf-d891-480f-82ae-37f3fb67df33-kube-api-access-942mk\") pod \"community-operators-ttmn8\" (UID: \"572611cf-d891-480f-82ae-37f3fb67df33\") " pod="openshift-marketplace/community-operators-ttmn8" Jan 27 21:01:56 crc kubenswrapper[4858]: I0127 21:01:56.268457 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572611cf-d891-480f-82ae-37f3fb67df33-utilities\") pod \"community-operators-ttmn8\" (UID: \"572611cf-d891-480f-82ae-37f3fb67df33\") " pod="openshift-marketplace/community-operators-ttmn8" Jan 27 21:01:56 crc kubenswrapper[4858]: I0127 21:01:56.268771 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572611cf-d891-480f-82ae-37f3fb67df33-catalog-content\") pod \"community-operators-ttmn8\" (UID: \"572611cf-d891-480f-82ae-37f3fb67df33\") " pod="openshift-marketplace/community-operators-ttmn8" Jan 27 21:01:56 crc kubenswrapper[4858]: I0127 21:01:56.370734 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-942mk\" (UniqueName: \"kubernetes.io/projected/572611cf-d891-480f-82ae-37f3fb67df33-kube-api-access-942mk\") pod \"community-operators-ttmn8\" (UID: \"572611cf-d891-480f-82ae-37f3fb67df33\") " pod="openshift-marketplace/community-operators-ttmn8" Jan 27 21:01:56 crc kubenswrapper[4858]: I0127 21:01:56.370806 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572611cf-d891-480f-82ae-37f3fb67df33-utilities\") pod \"community-operators-ttmn8\" (UID: \"572611cf-d891-480f-82ae-37f3fb67df33\") " pod="openshift-marketplace/community-operators-ttmn8" Jan 27 21:01:56 crc kubenswrapper[4858]: I0127 21:01:56.370869 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572611cf-d891-480f-82ae-37f3fb67df33-catalog-content\") pod \"community-operators-ttmn8\" (UID: \"572611cf-d891-480f-82ae-37f3fb67df33\") " pod="openshift-marketplace/community-operators-ttmn8" Jan 27 21:01:56 crc kubenswrapper[4858]: I0127 21:01:56.371647 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572611cf-d891-480f-82ae-37f3fb67df33-catalog-content\") pod \"community-operators-ttmn8\" (UID: \"572611cf-d891-480f-82ae-37f3fb67df33\") " pod="openshift-marketplace/community-operators-ttmn8" Jan 27 21:01:56 crc kubenswrapper[4858]: I0127 21:01:56.371649 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572611cf-d891-480f-82ae-37f3fb67df33-utilities\") pod \"community-operators-ttmn8\" (UID: \"572611cf-d891-480f-82ae-37f3fb67df33\") " pod="openshift-marketplace/community-operators-ttmn8" Jan 27 21:01:56 crc kubenswrapper[4858]: I0127 21:01:56.394810 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-942mk\" (UniqueName: \"kubernetes.io/projected/572611cf-d891-480f-82ae-37f3fb67df33-kube-api-access-942mk\") pod \"community-operators-ttmn8\" (UID: \"572611cf-d891-480f-82ae-37f3fb67df33\") " pod="openshift-marketplace/community-operators-ttmn8" Jan 27 21:01:56 crc kubenswrapper[4858]: I0127 21:01:56.489538 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ttmn8" Jan 27 21:01:57 crc kubenswrapper[4858]: I0127 21:01:57.037901 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ttmn8"] Jan 27 21:01:57 crc kubenswrapper[4858]: I0127 21:01:57.977080 4858 generic.go:334] "Generic (PLEG): container finished" podID="572611cf-d891-480f-82ae-37f3fb67df33" containerID="355d7997fcc3ec7e0143ddaa0cbad0cc1f40ca45607548467620fdcac86856e2" exitCode=0 Jan 27 21:01:57 crc kubenswrapper[4858]: I0127 21:01:57.977130 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttmn8" event={"ID":"572611cf-d891-480f-82ae-37f3fb67df33","Type":"ContainerDied","Data":"355d7997fcc3ec7e0143ddaa0cbad0cc1f40ca45607548467620fdcac86856e2"} Jan 27 21:01:57 crc kubenswrapper[4858]: I0127 21:01:57.977159 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttmn8" event={"ID":"572611cf-d891-480f-82ae-37f3fb67df33","Type":"ContainerStarted","Data":"dc5fe9b69bc1297d49d7afb28ec1dea22c9f4854061565f8e08f20da9f13e5c6"} Jan 27 21:01:59 crc kubenswrapper[4858]: I0127 21:01:59.997626 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttmn8" event={"ID":"572611cf-d891-480f-82ae-37f3fb67df33","Type":"ContainerStarted","Data":"c3efb2d9da6a9b3d58561d7d54a6e4685335eb381edce317ef86e2ba1134ca16"} Jan 27 21:02:03 crc kubenswrapper[4858]: I0127 21:02:03.567365 4858 generic.go:334] "Generic (PLEG): container finished" podID="572611cf-d891-480f-82ae-37f3fb67df33" containerID="c3efb2d9da6a9b3d58561d7d54a6e4685335eb381edce317ef86e2ba1134ca16" exitCode=0 Jan 27 21:02:03 crc kubenswrapper[4858]: I0127 21:02:03.567582 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttmn8" event={"ID":"572611cf-d891-480f-82ae-37f3fb67df33","Type":"ContainerDied","Data":"c3efb2d9da6a9b3d58561d7d54a6e4685335eb381edce317ef86e2ba1134ca16"} Jan 27 21:02:04 crc kubenswrapper[4858]: I0127 21:02:04.581917 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttmn8" event={"ID":"572611cf-d891-480f-82ae-37f3fb67df33","Type":"ContainerStarted","Data":"ff506370e3ae2cb961916367b51c5d76212736b73311870e79b78d8c180edf0d"} Jan 27 21:02:04 crc kubenswrapper[4858]: I0127 21:02:04.608921 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ttmn8" podStartSLOduration=2.533481276 podStartE2EDuration="8.608893036s" podCreationTimestamp="2026-01-27 21:01:56 +0000 UTC" firstStartedPulling="2026-01-27 21:01:57.979738114 +0000 UTC m=+3262.687553820" lastFinishedPulling="2026-01-27 21:02:04.055149874 +0000 UTC m=+3268.762965580" observedRunningTime="2026-01-27 21:02:04.597568994 +0000 UTC m=+3269.305384700" watchObservedRunningTime="2026-01-27 21:02:04.608893036 +0000 UTC m=+3269.316708742" Jan 27 21:02:06 crc kubenswrapper[4858]: I0127 21:02:06.490602 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-ttmn8" Jan 27 21:02:06 crc kubenswrapper[4858]: I0127 21:02:06.490964 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ttmn8" Jan 27 21:02:07 crc kubenswrapper[4858]: I0127 21:02:07.558130 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ttmn8" podUID="572611cf-d891-480f-82ae-37f3fb67df33" containerName="registry-server" probeResult="failure" output=< Jan 27 21:02:07 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 21:02:07 crc kubenswrapper[4858]: > Jan 27 21:02:16 crc kubenswrapper[4858]: I0127 21:02:16.558769 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ttmn8" Jan 27 21:02:16 crc kubenswrapper[4858]: I0127 21:02:16.618906 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5n9k5"] Jan 27 21:02:16 crc kubenswrapper[4858]: I0127 21:02:16.623223 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5n9k5" Jan 27 21:02:16 crc kubenswrapper[4858]: I0127 21:02:16.626614 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ttmn8" Jan 27 21:02:16 crc kubenswrapper[4858]: I0127 21:02:16.647880 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5n9k5"] Jan 27 21:02:16 crc kubenswrapper[4858]: I0127 21:02:16.750618 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c17ccbf8-be6a-4826-ae19-921b9f45e38a-utilities\") pod \"redhat-marketplace-5n9k5\" (UID: \"c17ccbf8-be6a-4826-ae19-921b9f45e38a\") " pod="openshift-marketplace/redhat-marketplace-5n9k5" Jan 27 21:02:16 crc kubenswrapper[4858]: I0127 21:02:16.750718 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c17ccbf8-be6a-4826-ae19-921b9f45e38a-catalog-content\") pod \"redhat-marketplace-5n9k5\" (UID: \"c17ccbf8-be6a-4826-ae19-921b9f45e38a\") " pod="openshift-marketplace/redhat-marketplace-5n9k5" Jan 27 21:02:16 crc kubenswrapper[4858]: I0127 21:02:16.750809 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz2q5\" (UniqueName: \"kubernetes.io/projected/c17ccbf8-be6a-4826-ae19-921b9f45e38a-kube-api-access-wz2q5\") pod \"redhat-marketplace-5n9k5\" (UID: \"c17ccbf8-be6a-4826-ae19-921b9f45e38a\") " pod="openshift-marketplace/redhat-marketplace-5n9k5" Jan 27 21:02:16 crc kubenswrapper[4858]: I0127 21:02:16.853060 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c17ccbf8-be6a-4826-ae19-921b9f45e38a-catalog-content\") pod \"redhat-marketplace-5n9k5\" (UID: \"c17ccbf8-be6a-4826-ae19-921b9f45e38a\") " pod="openshift-marketplace/redhat-marketplace-5n9k5" Jan 27 21:02:16 crc kubenswrapper[4858]: I0127 21:02:16.853169 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz2q5\" (UniqueName: \"kubernetes.io/projected/c17ccbf8-be6a-4826-ae19-921b9f45e38a-kube-api-access-wz2q5\") pod \"redhat-marketplace-5n9k5\" (UID: 
\"c17ccbf8-be6a-4826-ae19-921b9f45e38a\") " pod="openshift-marketplace/redhat-marketplace-5n9k5" Jan 27 21:02:16 crc kubenswrapper[4858]: I0127 21:02:16.853352 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c17ccbf8-be6a-4826-ae19-921b9f45e38a-utilities\") pod \"redhat-marketplace-5n9k5\" (UID: \"c17ccbf8-be6a-4826-ae19-921b9f45e38a\") " pod="openshift-marketplace/redhat-marketplace-5n9k5" Jan 27 21:02:16 crc kubenswrapper[4858]: I0127 21:02:16.853798 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c17ccbf8-be6a-4826-ae19-921b9f45e38a-catalog-content\") pod \"redhat-marketplace-5n9k5\" (UID: \"c17ccbf8-be6a-4826-ae19-921b9f45e38a\") " pod="openshift-marketplace/redhat-marketplace-5n9k5" Jan 27 21:02:16 crc kubenswrapper[4858]: I0127 21:02:16.854022 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c17ccbf8-be6a-4826-ae19-921b9f45e38a-utilities\") pod \"redhat-marketplace-5n9k5\" (UID: \"c17ccbf8-be6a-4826-ae19-921b9f45e38a\") " pod="openshift-marketplace/redhat-marketplace-5n9k5" Jan 27 21:02:16 crc kubenswrapper[4858]: I0127 21:02:16.877746 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz2q5\" (UniqueName: \"kubernetes.io/projected/c17ccbf8-be6a-4826-ae19-921b9f45e38a-kube-api-access-wz2q5\") pod \"redhat-marketplace-5n9k5\" (UID: \"c17ccbf8-be6a-4826-ae19-921b9f45e38a\") " pod="openshift-marketplace/redhat-marketplace-5n9k5" Jan 27 21:02:16 crc kubenswrapper[4858]: I0127 21:02:16.950993 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5n9k5" Jan 27 21:02:17 crc kubenswrapper[4858]: I0127 21:02:17.185711 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ttmn8"] Jan 27 21:02:17 crc kubenswrapper[4858]: I0127 21:02:17.562752 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5n9k5"] Jan 27 21:02:17 crc kubenswrapper[4858]: I0127 21:02:17.721693 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5n9k5" event={"ID":"c17ccbf8-be6a-4826-ae19-921b9f45e38a","Type":"ContainerStarted","Data":"263a06c2ec813f34c37b2cecf736edfbac7876f9dd2e5c7643261deda5cd2547"} Jan 27 21:02:17 crc kubenswrapper[4858]: I0127 21:02:17.721783 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ttmn8" podUID="572611cf-d891-480f-82ae-37f3fb67df33" containerName="registry-server" containerID="cri-o://ff506370e3ae2cb961916367b51c5d76212736b73311870e79b78d8c180edf0d" gracePeriod=2 Jan 27 21:02:18 crc kubenswrapper[4858]: I0127 21:02:18.740172 4858 generic.go:334] "Generic (PLEG): container finished" podID="c17ccbf8-be6a-4826-ae19-921b9f45e38a" containerID="a9e37eea9784c400dabe6842b75b9e5966825e2c7d87d3df51462c16a2a9de87" exitCode=0 Jan 27 21:02:18 crc kubenswrapper[4858]: I0127 21:02:18.740290 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5n9k5" event={"ID":"c17ccbf8-be6a-4826-ae19-921b9f45e38a","Type":"ContainerDied","Data":"a9e37eea9784c400dabe6842b75b9e5966825e2c7d87d3df51462c16a2a9de87"} Jan 27 21:02:18 crc kubenswrapper[4858]: I0127 21:02:18.750347 4858 generic.go:334] "Generic (PLEG): 
container finished" podID="572611cf-d891-480f-82ae-37f3fb67df33" containerID="ff506370e3ae2cb961916367b51c5d76212736b73311870e79b78d8c180edf0d" exitCode=0 Jan 27 21:02:18 crc kubenswrapper[4858]: I0127 21:02:18.750405 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttmn8" event={"ID":"572611cf-d891-480f-82ae-37f3fb67df33","Type":"ContainerDied","Data":"ff506370e3ae2cb961916367b51c5d76212736b73311870e79b78d8c180edf0d"} Jan 27 21:02:18 crc kubenswrapper[4858]: I0127 21:02:18.999638 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ttmn8" Jan 27 21:02:19 crc kubenswrapper[4858]: I0127 21:02:19.100259 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572611cf-d891-480f-82ae-37f3fb67df33-utilities\") pod \"572611cf-d891-480f-82ae-37f3fb67df33\" (UID: \"572611cf-d891-480f-82ae-37f3fb67df33\") " Jan 27 21:02:19 crc kubenswrapper[4858]: I0127 21:02:19.100393 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572611cf-d891-480f-82ae-37f3fb67df33-catalog-content\") pod \"572611cf-d891-480f-82ae-37f3fb67df33\" (UID: \"572611cf-d891-480f-82ae-37f3fb67df33\") " Jan 27 21:02:19 crc kubenswrapper[4858]: I0127 21:02:19.100501 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-942mk\" (UniqueName: \"kubernetes.io/projected/572611cf-d891-480f-82ae-37f3fb67df33-kube-api-access-942mk\") pod \"572611cf-d891-480f-82ae-37f3fb67df33\" (UID: \"572611cf-d891-480f-82ae-37f3fb67df33\") " Jan 27 21:02:19 crc kubenswrapper[4858]: I0127 21:02:19.101391 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/572611cf-d891-480f-82ae-37f3fb67df33-utilities" (OuterVolumeSpecName: "utilities") pod "572611cf-d891-480f-82ae-37f3fb67df33" (UID: "572611cf-d891-480f-82ae-37f3fb67df33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:02:19 crc kubenswrapper[4858]: I0127 21:02:19.119039 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/572611cf-d891-480f-82ae-37f3fb67df33-kube-api-access-942mk" (OuterVolumeSpecName: "kube-api-access-942mk") pod "572611cf-d891-480f-82ae-37f3fb67df33" (UID: "572611cf-d891-480f-82ae-37f3fb67df33"). InnerVolumeSpecName "kube-api-access-942mk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:02:19 crc kubenswrapper[4858]: I0127 21:02:19.174940 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/572611cf-d891-480f-82ae-37f3fb67df33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "572611cf-d891-480f-82ae-37f3fb67df33" (UID: "572611cf-d891-480f-82ae-37f3fb67df33"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:02:19 crc kubenswrapper[4858]: I0127 21:02:19.204626 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572611cf-d891-480f-82ae-37f3fb67df33-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:02:19 crc kubenswrapper[4858]: I0127 21:02:19.204697 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572611cf-d891-480f-82ae-37f3fb67df33-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:02:19 crc kubenswrapper[4858]: I0127 21:02:19.204717 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-942mk\" (UniqueName: \"kubernetes.io/projected/572611cf-d891-480f-82ae-37f3fb67df33-kube-api-access-942mk\") on node \"crc\" DevicePath \"\"" Jan 27 21:02:19 crc kubenswrapper[4858]: I0127 21:02:19.762991 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttmn8" event={"ID":"572611cf-d891-480f-82ae-37f3fb67df33","Type":"ContainerDied","Data":"dc5fe9b69bc1297d49d7afb28ec1dea22c9f4854061565f8e08f20da9f13e5c6"} Jan 27 21:02:19 crc kubenswrapper[4858]: I0127 21:02:19.763071 4858 scope.go:117] "RemoveContainer" containerID="ff506370e3ae2cb961916367b51c5d76212736b73311870e79b78d8c180edf0d" Jan 27 21:02:19 crc kubenswrapper[4858]: I0127 21:02:19.763078 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ttmn8" Jan 27 21:02:19 crc kubenswrapper[4858]: I0127 21:02:19.788857 4858 scope.go:117] "RemoveContainer" containerID="c3efb2d9da6a9b3d58561d7d54a6e4685335eb381edce317ef86e2ba1134ca16" Jan 27 21:02:19 crc kubenswrapper[4858]: I0127 21:02:19.807569 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ttmn8"] Jan 27 21:02:19 crc kubenswrapper[4858]: I0127 21:02:19.827132 4858 scope.go:117] "RemoveContainer" containerID="355d7997fcc3ec7e0143ddaa0cbad0cc1f40ca45607548467620fdcac86856e2" Jan 27 21:02:19 crc kubenswrapper[4858]: I0127 21:02:19.837487 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ttmn8"] Jan 27 21:02:20 crc kubenswrapper[4858]: I0127 21:02:20.086012 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="572611cf-d891-480f-82ae-37f3fb67df33" path="/var/lib/kubelet/pods/572611cf-d891-480f-82ae-37f3fb67df33/volumes" Jan 27 21:02:20 crc kubenswrapper[4858]: I0127 21:02:20.774898 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5n9k5" event={"ID":"c17ccbf8-be6a-4826-ae19-921b9f45e38a","Type":"ContainerStarted","Data":"3900ab582f4170a9b2d7f219699fdae54c1cada922ea4385e7081ef05ebf2e7d"} Jan 27 21:02:21 crc kubenswrapper[4858]: I0127 21:02:21.792274 4858 generic.go:334] "Generic (PLEG): container finished" podID="c17ccbf8-be6a-4826-ae19-921b9f45e38a" containerID="3900ab582f4170a9b2d7f219699fdae54c1cada922ea4385e7081ef05ebf2e7d" exitCode=0 Jan 27 21:02:21 crc kubenswrapper[4858]: I0127 21:02:21.792401 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5n9k5" event={"ID":"c17ccbf8-be6a-4826-ae19-921b9f45e38a","Type":"ContainerDied","Data":"3900ab582f4170a9b2d7f219699fdae54c1cada922ea4385e7081ef05ebf2e7d"} Jan 27 21:02:23 crc kubenswrapper[4858]: I0127 21:02:23.825644 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-5n9k5" event={"ID":"c17ccbf8-be6a-4826-ae19-921b9f45e38a","Type":"ContainerStarted","Data":"6f396aaedd96e82f11fce0321526dedb1f99c47de2b8b3d6dfce6dee0e18fee9"} Jan 27 21:02:26 crc kubenswrapper[4858]: I0127 21:02:26.951592 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5n9k5" Jan 27 21:02:26 crc kubenswrapper[4858]: I0127 21:02:26.952318 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5n9k5" Jan 27 21:02:27 crc kubenswrapper[4858]: I0127 21:02:27.008326 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5n9k5" Jan 27 21:02:27 crc kubenswrapper[4858]: I0127 21:02:27.028710 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5n9k5" podStartSLOduration=7.002826138 podStartE2EDuration="11.028690469s" podCreationTimestamp="2026-01-27 21:02:16 +0000 UTC" firstStartedPulling="2026-01-27 21:02:18.742976364 +0000 UTC m=+3283.450792070" lastFinishedPulling="2026-01-27 21:02:22.768840685 +0000 UTC m=+3287.476656401" observedRunningTime="2026-01-27 21:02:23.855379384 +0000 UTC m=+3288.563195110" watchObservedRunningTime="2026-01-27 21:02:27.028690469 +0000 UTC m=+3291.736506175" Jan 27 21:02:37 crc kubenswrapper[4858]: I0127 21:02:37.028784 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5n9k5" Jan 27 21:02:37 crc kubenswrapper[4858]: I0127 21:02:37.090805 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5n9k5"] Jan 27 21:02:38 crc kubenswrapper[4858]: I0127 21:02:38.023459 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5n9k5" podUID="c17ccbf8-be6a-4826-ae19-921b9f45e38a" containerName="registry-server" containerID="cri-o://6f396aaedd96e82f11fce0321526dedb1f99c47de2b8b3d6dfce6dee0e18fee9" gracePeriod=2 Jan 27 21:02:39 crc kubenswrapper[4858]: I0127 21:02:39.033532 4858 generic.go:334] "Generic (PLEG): container finished" podID="c17ccbf8-be6a-4826-ae19-921b9f45e38a" containerID="6f396aaedd96e82f11fce0321526dedb1f99c47de2b8b3d6dfce6dee0e18fee9" exitCode=0 Jan 27 21:02:39 crc kubenswrapper[4858]: I0127 21:02:39.033743 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5n9k5" event={"ID":"c17ccbf8-be6a-4826-ae19-921b9f45e38a","Type":"ContainerDied","Data":"6f396aaedd96e82f11fce0321526dedb1f99c47de2b8b3d6dfce6dee0e18fee9"} Jan 27 21:02:39 crc kubenswrapper[4858]: I0127 21:02:39.189602 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5n9k5" Jan 27 21:02:39 crc kubenswrapper[4858]: I0127 21:02:39.292925 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz2q5\" (UniqueName: \"kubernetes.io/projected/c17ccbf8-be6a-4826-ae19-921b9f45e38a-kube-api-access-wz2q5\") pod \"c17ccbf8-be6a-4826-ae19-921b9f45e38a\" (UID: \"c17ccbf8-be6a-4826-ae19-921b9f45e38a\") " Jan 27 21:02:39 crc kubenswrapper[4858]: I0127 21:02:39.293199 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c17ccbf8-be6a-4826-ae19-921b9f45e38a-utilities\") pod \"c17ccbf8-be6a-4826-ae19-921b9f45e38a\" (UID: \"c17ccbf8-be6a-4826-ae19-921b9f45e38a\") " Jan 27 21:02:39 crc kubenswrapper[4858]: I0127 21:02:39.293286 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c17ccbf8-be6a-4826-ae19-921b9f45e38a-catalog-content\") pod \"c17ccbf8-be6a-4826-ae19-921b9f45e38a\" (UID: \"c17ccbf8-be6a-4826-ae19-921b9f45e38a\") " Jan 27 21:02:39 crc kubenswrapper[4858]: I0127 21:02:39.294106 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c17ccbf8-be6a-4826-ae19-921b9f45e38a-utilities" (OuterVolumeSpecName: "utilities") pod "c17ccbf8-be6a-4826-ae19-921b9f45e38a" (UID: "c17ccbf8-be6a-4826-ae19-921b9f45e38a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:02:39 crc kubenswrapper[4858]: I0127 21:02:39.305846 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c17ccbf8-be6a-4826-ae19-921b9f45e38a-kube-api-access-wz2q5" (OuterVolumeSpecName: "kube-api-access-wz2q5") pod "c17ccbf8-be6a-4826-ae19-921b9f45e38a" (UID: "c17ccbf8-be6a-4826-ae19-921b9f45e38a"). InnerVolumeSpecName "kube-api-access-wz2q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:02:39 crc kubenswrapper[4858]: I0127 21:02:39.321018 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c17ccbf8-be6a-4826-ae19-921b9f45e38a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c17ccbf8-be6a-4826-ae19-921b9f45e38a" (UID: "c17ccbf8-be6a-4826-ae19-921b9f45e38a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:02:39 crc kubenswrapper[4858]: I0127 21:02:39.395467 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz2q5\" (UniqueName: \"kubernetes.io/projected/c17ccbf8-be6a-4826-ae19-921b9f45e38a-kube-api-access-wz2q5\") on node \"crc\" DevicePath \"\"" Jan 27 21:02:39 crc kubenswrapper[4858]: I0127 21:02:39.395516 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c17ccbf8-be6a-4826-ae19-921b9f45e38a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:02:39 crc kubenswrapper[4858]: I0127 21:02:39.395526 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c17ccbf8-be6a-4826-ae19-921b9f45e38a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:02:40 crc kubenswrapper[4858]: I0127 21:02:40.049017 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5n9k5" event={"ID":"c17ccbf8-be6a-4826-ae19-921b9f45e38a","Type":"ContainerDied","Data":"263a06c2ec813f34c37b2cecf736edfbac7876f9dd2e5c7643261deda5cd2547"} Jan 27 21:02:40 crc kubenswrapper[4858]: I0127 21:02:40.049086 4858 scope.go:117] "RemoveContainer" containerID="6f396aaedd96e82f11fce0321526dedb1f99c47de2b8b3d6dfce6dee0e18fee9" Jan 27 21:02:40 crc kubenswrapper[4858]: I0127 21:02:40.049127 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5n9k5" Jan 27 21:02:40 crc kubenswrapper[4858]: I0127 21:02:40.076695 4858 scope.go:117] "RemoveContainer" containerID="3900ab582f4170a9b2d7f219699fdae54c1cada922ea4385e7081ef05ebf2e7d" Jan 27 21:02:40 crc kubenswrapper[4858]: I0127 21:02:40.105344 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5n9k5"] Jan 27 21:02:40 crc kubenswrapper[4858]: I0127 21:02:40.107539 4858 scope.go:117] "RemoveContainer" containerID="a9e37eea9784c400dabe6842b75b9e5966825e2c7d87d3df51462c16a2a9de87" Jan 27 21:02:40 crc kubenswrapper[4858]: I0127 21:02:40.125608 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5n9k5"] Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.677295 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6t7m7"] Jan 27 21:02:41 crc kubenswrapper[4858]: E0127 21:02:41.678030 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572611cf-d891-480f-82ae-37f3fb67df33" containerName="registry-server" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.678050 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="572611cf-d891-480f-82ae-37f3fb67df33" containerName="registry-server" Jan 27 21:02:41 crc kubenswrapper[4858]: E0127 21:02:41.678075 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572611cf-d891-480f-82ae-37f3fb67df33" containerName="extract-content" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.678083 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="572611cf-d891-480f-82ae-37f3fb67df33" containerName="extract-content" Jan 27 21:02:41 crc kubenswrapper[4858]: E0127 21:02:41.678094 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c17ccbf8-be6a-4826-ae19-921b9f45e38a" containerName="registry-server" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.678102 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c17ccbf8-be6a-4826-ae19-921b9f45e38a" containerName="registry-server" Jan 27 21:02:41 crc kubenswrapper[4858]: E0127 21:02:41.678132 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572611cf-d891-480f-82ae-37f3fb67df33" containerName="extract-utilities" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.678139 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="572611cf-d891-480f-82ae-37f3fb67df33" containerName="extract-utilities" Jan 27 21:02:41 crc kubenswrapper[4858]: E0127 21:02:41.678149 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c17ccbf8-be6a-4826-ae19-921b9f45e38a" containerName="extract-content" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.678155 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c17ccbf8-be6a-4826-ae19-921b9f45e38a" containerName="extract-content" Jan 27 21:02:41 crc kubenswrapper[4858]: E0127 21:02:41.678187 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c17ccbf8-be6a-4826-ae19-921b9f45e38a" containerName="extract-utilities" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.678195 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c17ccbf8-be6a-4826-ae19-921b9f45e38a" containerName="extract-utilities" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.678441 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="572611cf-d891-480f-82ae-37f3fb67df33" containerName="registry-server" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.678457 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c17ccbf8-be6a-4826-ae19-921b9f45e38a" containerName="registry-server" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.680322 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6t7m7" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.701726 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6t7m7"] Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.763438 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-catalog-content\") pod \"redhat-operators-6t7m7\" (UID: \"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0\") " pod="openshift-marketplace/redhat-operators-6t7m7" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.763610 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-utilities\") pod \"redhat-operators-6t7m7\" (UID: \"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0\") " pod="openshift-marketplace/redhat-operators-6t7m7" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.764006 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6cfb\" (UniqueName: \"kubernetes.io/projected/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-kube-api-access-s6cfb\") pod \"redhat-operators-6t7m7\" (UID: \"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0\") " pod="openshift-marketplace/redhat-operators-6t7m7" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.866434 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-utilities\") pod \"redhat-operators-6t7m7\" (UID: \"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0\") " pod="openshift-marketplace/redhat-operators-6t7m7" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.866617 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6cfb\" (UniqueName: \"kubernetes.io/projected/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-kube-api-access-s6cfb\") pod \"redhat-operators-6t7m7\" (UID: \"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0\") " pod="openshift-marketplace/redhat-operators-6t7m7" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.866691 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-catalog-content\") pod \"redhat-operators-6t7m7\" (UID: \"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0\") " pod="openshift-marketplace/redhat-operators-6t7m7" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.867271 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-catalog-content\") pod \"redhat-operators-6t7m7\" (UID: \"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0\") " pod="openshift-marketplace/redhat-operators-6t7m7" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.867495 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-utilities\") pod \"redhat-operators-6t7m7\" (UID: \"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0\") " pod="openshift-marketplace/redhat-operators-6t7m7" Jan 27 21:02:41 crc kubenswrapper[4858]: I0127 21:02:41.887602 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s6cfb\" (UniqueName: \"kubernetes.io/projected/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-kube-api-access-s6cfb\") pod \"redhat-operators-6t7m7\" (UID: \"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0\") " pod="openshift-marketplace/redhat-operators-6t7m7" Jan 27 21:02:42 crc kubenswrapper[4858]: I0127 21:02:42.007047 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6t7m7" Jan 27 21:02:42 crc kubenswrapper[4858]: I0127 21:02:42.093742 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c17ccbf8-be6a-4826-ae19-921b9f45e38a" path="/var/lib/kubelet/pods/c17ccbf8-be6a-4826-ae19-921b9f45e38a/volumes" Jan 27 21:02:42 crc kubenswrapper[4858]: I0127 21:02:42.547507 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6t7m7"] Jan 27 21:02:43 crc kubenswrapper[4858]: I0127 21:02:43.086494 4858 generic.go:334] "Generic (PLEG): container finished" podID="0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" containerID="f616eb70ea46d38db353b218e8acddc7a72e750bf5bbf125091572ccc2f0d9aa" exitCode=0 Jan 27 21:02:43 crc kubenswrapper[4858]: I0127 21:02:43.086586 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t7m7" event={"ID":"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0","Type":"ContainerDied","Data":"f616eb70ea46d38db353b218e8acddc7a72e750bf5bbf125091572ccc2f0d9aa"} Jan 27 21:02:43 crc kubenswrapper[4858]: I0127 21:02:43.086764 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t7m7" event={"ID":"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0","Type":"ContainerStarted","Data":"491f18902e84477f9bf23797f21fb6de5ae0c61063bd320564f73a5b08074522"} Jan 27 21:02:47 crc kubenswrapper[4858]: I0127 21:02:47.137976 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t7m7" event={"ID":"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0","Type":"ContainerStarted","Data":"23fdc63e4d667ff75e95e65e3a487622a3aa5e651b9248a2d41d5e0666ea2e25"} Jan 27 21:03:12 crc kubenswrapper[4858]: I0127 21:03:12.420799 4858 generic.go:334] "Generic (PLEG): container finished" podID="0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" containerID="23fdc63e4d667ff75e95e65e3a487622a3aa5e651b9248a2d41d5e0666ea2e25" exitCode=0 Jan 27 21:03:12 crc kubenswrapper[4858]: I0127 21:03:12.420857 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t7m7" event={"ID":"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0","Type":"ContainerDied","Data":"23fdc63e4d667ff75e95e65e3a487622a3aa5e651b9248a2d41d5e0666ea2e25"} Jan 27 21:03:12 crc kubenswrapper[4858]: I0127 21:03:12.425064 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 21:03:13 crc kubenswrapper[4858]: I0127 21:03:13.436759 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t7m7" event={"ID":"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0","Type":"ContainerStarted","Data":"1d3fb52774023df8704969f7616864955a31eb86ecdb1ae14d8dda56f4466480"} Jan 27 21:03:13 crc kubenswrapper[4858]: I0127 21:03:13.473445 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6t7m7" podStartSLOduration=2.4122760149999998 podStartE2EDuration="32.473415958s" podCreationTimestamp="2026-01-27 21:02:41 +0000 UTC" firstStartedPulling="2026-01-27 21:02:43.088531804 +0000 UTC m=+3307.796347520" 
lastFinishedPulling="2026-01-27 21:03:13.149671757 +0000 UTC m=+3337.857487463" observedRunningTime="2026-01-27 21:03:13.460923604 +0000 UTC m=+3338.168739310" watchObservedRunningTime="2026-01-27 21:03:13.473415958 +0000 UTC m=+3338.181231664" Jan 27 21:03:22 crc kubenswrapper[4858]: I0127 21:03:22.007799 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6t7m7" Jan 27 21:03:22 crc kubenswrapper[4858]: I0127 21:03:22.008395 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6t7m7" Jan 27 21:03:23 crc kubenswrapper[4858]: I0127 21:03:23.066266 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6t7m7" podUID="0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" containerName="registry-server" probeResult="failure" output=< Jan 27 21:03:23 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 21:03:23 crc kubenswrapper[4858]: > Jan 27 21:03:33 crc kubenswrapper[4858]: I0127 21:03:33.066438 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6t7m7" podUID="0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" containerName="registry-server" probeResult="failure" output=< Jan 27 21:03:33 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 21:03:33 crc kubenswrapper[4858]: > Jan 27 21:03:43 crc kubenswrapper[4858]: I0127 21:03:43.056424 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6t7m7" podUID="0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" containerName="registry-server" probeResult="failure" output=< Jan 27 21:03:43 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 21:03:43 crc kubenswrapper[4858]: > Jan 27 21:03:52 crc kubenswrapper[4858]: I0127 21:03:52.059740 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6t7m7" Jan 27 21:03:52 crc kubenswrapper[4858]: I0127 21:03:52.115121 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6t7m7" Jan 27 21:03:52 crc kubenswrapper[4858]: I0127 21:03:52.295752 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6t7m7"] Jan 27 21:03:53 crc kubenswrapper[4858]: I0127 21:03:53.865005 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6t7m7" podUID="0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" containerName="registry-server" containerID="cri-o://1d3fb52774023df8704969f7616864955a31eb86ecdb1ae14d8dda56f4466480" gracePeriod=2 Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.387107 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6t7m7" Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.515041 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-utilities\") pod \"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0\" (UID: \"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0\") " Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.515155 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6cfb\" (UniqueName: \"kubernetes.io/projected/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-kube-api-access-s6cfb\") pod \"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0\" (UID: \"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0\") " Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.515259 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-catalog-content\") pod \"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0\" (UID: \"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0\") " Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.515864 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-utilities" (OuterVolumeSpecName: "utilities") pod "0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" (UID: "0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.520734 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-kube-api-access-s6cfb" (OuterVolumeSpecName: "kube-api-access-s6cfb") pod "0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" (UID: "0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0"). InnerVolumeSpecName "kube-api-access-s6cfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.617510 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.617565 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6cfb\" (UniqueName: \"kubernetes.io/projected/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-kube-api-access-s6cfb\") on node \"crc\" DevicePath \"\"" Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.660033 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" (UID: "0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.719544 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.873767 4858 generic.go:334] "Generic (PLEG): container finished" podID="0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" containerID="1d3fb52774023df8704969f7616864955a31eb86ecdb1ae14d8dda56f4466480" exitCode=0 Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.873816 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t7m7" event={"ID":"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0","Type":"ContainerDied","Data":"1d3fb52774023df8704969f7616864955a31eb86ecdb1ae14d8dda56f4466480"} Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.873826 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6t7m7" Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.873850 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6t7m7" event={"ID":"0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0","Type":"ContainerDied","Data":"491f18902e84477f9bf23797f21fb6de5ae0c61063bd320564f73a5b08074522"} Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.873872 4858 scope.go:117] "RemoveContainer" containerID="1d3fb52774023df8704969f7616864955a31eb86ecdb1ae14d8dda56f4466480" Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.912403 4858 scope.go:117] "RemoveContainer" containerID="23fdc63e4d667ff75e95e65e3a487622a3aa5e651b9248a2d41d5e0666ea2e25" Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.913330 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6t7m7"] Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.927156 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6t7m7"] Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.938676 4858 scope.go:117] "RemoveContainer" containerID="f616eb70ea46d38db353b218e8acddc7a72e750bf5bbf125091572ccc2f0d9aa" Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.979201 4858 scope.go:117] "RemoveContainer" containerID="1d3fb52774023df8704969f7616864955a31eb86ecdb1ae14d8dda56f4466480" Jan 27 21:03:54 crc kubenswrapper[4858]: E0127 21:03:54.979613 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d3fb52774023df8704969f7616864955a31eb86ecdb1ae14d8dda56f4466480\": container with ID starting with 1d3fb52774023df8704969f7616864955a31eb86ecdb1ae14d8dda56f4466480 not found: ID does not exist" containerID="1d3fb52774023df8704969f7616864955a31eb86ecdb1ae14d8dda56f4466480" Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.979649 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d3fb52774023df8704969f7616864955a31eb86ecdb1ae14d8dda56f4466480"} err="failed to get container status \"1d3fb52774023df8704969f7616864955a31eb86ecdb1ae14d8dda56f4466480\": rpc error: code = NotFound desc = could not find container \"1d3fb52774023df8704969f7616864955a31eb86ecdb1ae14d8dda56f4466480\": container with ID starting with 1d3fb52774023df8704969f7616864955a31eb86ecdb1ae14d8dda56f4466480 not found: ID does not exist" Jan 27 21:03:54 crc 
kubenswrapper[4858]: I0127 21:03:54.979676 4858 scope.go:117] "RemoveContainer" containerID="23fdc63e4d667ff75e95e65e3a487622a3aa5e651b9248a2d41d5e0666ea2e25" Jan 27 21:03:54 crc kubenswrapper[4858]: E0127 21:03:54.979987 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23fdc63e4d667ff75e95e65e3a487622a3aa5e651b9248a2d41d5e0666ea2e25\": container with ID starting with 23fdc63e4d667ff75e95e65e3a487622a3aa5e651b9248a2d41d5e0666ea2e25 not found: ID does not exist" containerID="23fdc63e4d667ff75e95e65e3a487622a3aa5e651b9248a2d41d5e0666ea2e25" Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.980011 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23fdc63e4d667ff75e95e65e3a487622a3aa5e651b9248a2d41d5e0666ea2e25"} err="failed to get container status \"23fdc63e4d667ff75e95e65e3a487622a3aa5e651b9248a2d41d5e0666ea2e25\": rpc error: code = NotFound desc = could not find container \"23fdc63e4d667ff75e95e65e3a487622a3aa5e651b9248a2d41d5e0666ea2e25\": container with ID starting with 23fdc63e4d667ff75e95e65e3a487622a3aa5e651b9248a2d41d5e0666ea2e25 not found: ID does not exist" Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.980027 4858 scope.go:117] "RemoveContainer" containerID="f616eb70ea46d38db353b218e8acddc7a72e750bf5bbf125091572ccc2f0d9aa" Jan 27 21:03:54 crc kubenswrapper[4858]: E0127 21:03:54.980316 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f616eb70ea46d38db353b218e8acddc7a72e750bf5bbf125091572ccc2f0d9aa\": container with ID starting with f616eb70ea46d38db353b218e8acddc7a72e750bf5bbf125091572ccc2f0d9aa not found: ID does not exist" containerID="f616eb70ea46d38db353b218e8acddc7a72e750bf5bbf125091572ccc2f0d9aa" Jan 27 21:03:54 crc kubenswrapper[4858]: I0127 21:03:54.980343 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f616eb70ea46d38db353b218e8acddc7a72e750bf5bbf125091572ccc2f0d9aa"} err="failed to get container status \"f616eb70ea46d38db353b218e8acddc7a72e750bf5bbf125091572ccc2f0d9aa\": rpc error: code = NotFound desc = could not find container \"f616eb70ea46d38db353b218e8acddc7a72e750bf5bbf125091572ccc2f0d9aa\": container with ID starting with f616eb70ea46d38db353b218e8acddc7a72e750bf5bbf125091572ccc2f0d9aa not found: ID does not exist" Jan 27 21:03:56 crc kubenswrapper[4858]: I0127 21:03:56.082580 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" path="/var/lib/kubelet/pods/0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0/volumes" Jan 27 21:03:59 crc kubenswrapper[4858]: I0127 21:03:59.328480 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:03:59 crc kubenswrapper[4858]: I0127 21:03:59.329046 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:04:29 crc kubenswrapper[4858]: I0127 21:04:29.328810 4858 patch_prober.go:28] interesting 
pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:04:29 crc kubenswrapper[4858]: I0127 21:04:29.329454 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:04:59 crc kubenswrapper[4858]: I0127 21:04:59.329321 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:04:59 crc kubenswrapper[4858]: I0127 21:04:59.329831 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:04:59 crc kubenswrapper[4858]: I0127 21:04:59.329874 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 21:04:59 crc kubenswrapper[4858]: I0127 21:04:59.330639 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0221581f682274ce510396feeb422f7fc8b447cf51bc13911aa2ff45fb1be6dd"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 21:04:59 crc kubenswrapper[4858]: I0127 21:04:59.330690 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://0221581f682274ce510396feeb422f7fc8b447cf51bc13911aa2ff45fb1be6dd" gracePeriod=600 Jan 27 21:04:59 crc kubenswrapper[4858]: I0127 21:04:59.509291 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="0221581f682274ce510396feeb422f7fc8b447cf51bc13911aa2ff45fb1be6dd" exitCode=0 Jan 27 21:04:59 crc kubenswrapper[4858]: I0127 21:04:59.509337 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"0221581f682274ce510396feeb422f7fc8b447cf51bc13911aa2ff45fb1be6dd"} Jan 27 21:04:59 crc kubenswrapper[4858]: I0127 21:04:59.509852 4858 scope.go:117] "RemoveContainer" containerID="eeae9afce4346c2ce8460e937a38f54f780a4faef531fad48c011e845a5b91f9" Jan 27 21:05:00 crc kubenswrapper[4858]: I0127 21:05:00.526167 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9"} Jan 27 21:06:59 
crc kubenswrapper[4858]: I0127 21:06:59.328896 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:06:59 crc kubenswrapper[4858]: I0127 21:06:59.329487 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:07:29 crc kubenswrapper[4858]: I0127 21:07:29.329449 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:07:29 crc kubenswrapper[4858]: I0127 21:07:29.329896 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:07:59 crc kubenswrapper[4858]: I0127 21:07:59.328781 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:07:59 crc kubenswrapper[4858]: I0127 21:07:59.329706 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:07:59 crc kubenswrapper[4858]: I0127 21:07:59.329757 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 21:07:59 crc kubenswrapper[4858]: I0127 21:07:59.330478 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 21:07:59 crc kubenswrapper[4858]: I0127 21:07:59.330533 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" gracePeriod=600 Jan 27 21:07:59 crc kubenswrapper[4858]: E0127 21:07:59.455720 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:08:00 crc kubenswrapper[4858]: I0127 21:08:00.462128 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" exitCode=0 Jan 27 21:08:00 crc kubenswrapper[4858]: I0127 21:08:00.462487 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9"} Jan 27 21:08:00 crc kubenswrapper[4858]: I0127 21:08:00.462864 4858 scope.go:117] "RemoveContainer" containerID="0221581f682274ce510396feeb422f7fc8b447cf51bc13911aa2ff45fb1be6dd" Jan 27 21:08:00 crc kubenswrapper[4858]: I0127 21:08:00.464497 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:08:00 crc kubenswrapper[4858]: E0127 21:08:00.465141 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:08:12 crc kubenswrapper[4858]: I0127 21:08:12.072070 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:08:12 crc kubenswrapper[4858]: E0127 21:08:12.073217 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:08:23 crc kubenswrapper[4858]: I0127 21:08:23.072386 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:08:23 crc kubenswrapper[4858]: E0127 21:08:23.073713 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:08:38 crc kubenswrapper[4858]: I0127 21:08:38.071351 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:08:38 crc kubenswrapper[4858]: E0127 21:08:38.072332 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:08:51 crc kubenswrapper[4858]: I0127 21:08:51.071645 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:08:51 crc kubenswrapper[4858]: E0127 21:08:51.072564 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:09:05 crc kubenswrapper[4858]: I0127 21:09:05.072240 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:09:05 crc kubenswrapper[4858]: E0127 21:09:05.073730 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:09:19 crc kubenswrapper[4858]: I0127 21:09:19.073535 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:09:19 crc kubenswrapper[4858]: E0127 21:09:19.074307 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:09:23 crc kubenswrapper[4858]: I0127 21:09:23.522347 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z7wwn"] Jan 27 21:09:23 crc kubenswrapper[4858]: E0127 21:09:23.524639 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" containerName="extract-utilities" Jan 27 21:09:23 crc kubenswrapper[4858]: I0127 21:09:23.524912 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" containerName="extract-utilities" Jan 27 21:09:23 crc kubenswrapper[4858]: E0127 21:09:23.525022 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" containerName="registry-server" Jan 27 21:09:23 crc kubenswrapper[4858]: I0127 21:09:23.525036 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" containerName="registry-server" Jan 27 21:09:23 crc kubenswrapper[4858]: E0127 21:09:23.525109 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" containerName="extract-content" Jan 27 21:09:23 crc kubenswrapper[4858]: I0127 21:09:23.525122 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" containerName="extract-content" Jan 27 21:09:23 crc kubenswrapper[4858]: I0127 21:09:23.525879 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e0eeca4-0dc1-4e7f-9a71-784b1e1d5be0" containerName="registry-server" Jan 27 21:09:23 crc kubenswrapper[4858]: I0127 21:09:23.529473 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7wwn" Jan 27 21:09:23 crc kubenswrapper[4858]: I0127 21:09:23.555748 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z7wwn"] Jan 27 21:09:23 crc kubenswrapper[4858]: I0127 21:09:23.621893 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-catalog-content\") pod \"certified-operators-z7wwn\" (UID: \"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682\") " pod="openshift-marketplace/certified-operators-z7wwn" Jan 27 21:09:23 crc kubenswrapper[4858]: I0127 21:09:23.622111 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrzvd\" (UniqueName: \"kubernetes.io/projected/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-kube-api-access-xrzvd\") pod \"certified-operators-z7wwn\" (UID: \"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682\") " pod="openshift-marketplace/certified-operators-z7wwn" Jan 27 21:09:23 crc kubenswrapper[4858]: I0127 21:09:23.622294 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-utilities\") pod \"certified-operators-z7wwn\" (UID: \"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682\") " pod="openshift-marketplace/certified-operators-z7wwn" Jan 27 21:09:23 crc kubenswrapper[4858]: I0127 21:09:23.724535 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-catalog-content\") pod \"certified-operators-z7wwn\" (UID: \"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682\") " pod="openshift-marketplace/certified-operators-z7wwn" Jan 27 21:09:23 crc kubenswrapper[4858]: I0127 21:09:23.724657 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrzvd\" (UniqueName: \"kubernetes.io/projected/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-kube-api-access-xrzvd\") pod \"certified-operators-z7wwn\" (UID: \"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682\") " pod="openshift-marketplace/certified-operators-z7wwn" Jan 27 21:09:23 crc kubenswrapper[4858]: I0127 21:09:23.724725 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-utilities\") pod \"certified-operators-z7wwn\" (UID: \"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682\") " pod="openshift-marketplace/certified-operators-z7wwn" Jan 27 21:09:23 crc kubenswrapper[4858]: I0127 21:09:23.725254 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-catalog-content\") pod \"certified-operators-z7wwn\" (UID: \"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682\") " pod="openshift-marketplace/certified-operators-z7wwn" Jan 27 21:09:23 crc kubenswrapper[4858]: I0127 21:09:23.725269 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-utilities\") pod \"certified-operators-z7wwn\" (UID: \"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682\") " pod="openshift-marketplace/certified-operators-z7wwn" Jan 27 21:09:23 crc kubenswrapper[4858]: I0127 21:09:23.752627 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrzvd\" (UniqueName: \"kubernetes.io/projected/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-kube-api-access-xrzvd\") pod \"certified-operators-z7wwn\" (UID: \"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682\") " pod="openshift-marketplace/certified-operators-z7wwn" Jan 27 21:09:23 crc kubenswrapper[4858]: I0127 21:09:23.876309 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7wwn" Jan 27 21:09:25 crc kubenswrapper[4858]: I0127 21:09:25.120838 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z7wwn"] Jan 27 21:09:25 crc kubenswrapper[4858]: I0127 21:09:25.350860 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7wwn" event={"ID":"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682","Type":"ContainerStarted","Data":"4580505cf0d9d734fc4a35a2c64db2a00beeaf7bb6039de110faa93a798f986d"} Jan 27 21:09:26 crc kubenswrapper[4858]: I0127 21:09:26.363967 4858 generic.go:334] "Generic (PLEG): container finished" podID="a8ea91d1-7cb8-41c5-9dd2-0d28b4219682" containerID="cc9304fa4f886d998de33b920fd8470017236f97a6564db4012ceef6fb9f0340" exitCode=0 Jan 27 21:09:26 crc kubenswrapper[4858]: I0127 21:09:26.364073 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7wwn" event={"ID":"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682","Type":"ContainerDied","Data":"cc9304fa4f886d998de33b920fd8470017236f97a6564db4012ceef6fb9f0340"} Jan 27 21:09:26 crc kubenswrapper[4858]: I0127 21:09:26.366825 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 21:09:27 crc kubenswrapper[4858]: I0127 21:09:27.381330 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7wwn" event={"ID":"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682","Type":"ContainerStarted","Data":"84de919f1ff277c33fb273fddb81a3c305771df55c9e390f65170001885c15e1"} Jan 27 21:09:29 crc kubenswrapper[4858]: I0127 21:09:29.404458 4858 generic.go:334] "Generic (PLEG): container finished" podID="a8ea91d1-7cb8-41c5-9dd2-0d28b4219682" containerID="84de919f1ff277c33fb273fddb81a3c305771df55c9e390f65170001885c15e1" exitCode=0 Jan 27 21:09:29 crc kubenswrapper[4858]: I0127 21:09:29.404565 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7wwn" event={"ID":"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682","Type":"ContainerDied","Data":"84de919f1ff277c33fb273fddb81a3c305771df55c9e390f65170001885c15e1"} Jan 27 21:09:30 crc kubenswrapper[4858]: I0127 21:09:30.072408 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:09:30 crc kubenswrapper[4858]: E0127 21:09:30.073136 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:09:30 crc kubenswrapper[4858]: I0127 21:09:30.417381 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7wwn" event={"ID":"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682","Type":"ContainerStarted","Data":"7c081da41b1aab9b19827c2e5a993f54290134929f4ac1a90dcb16462b6326f0"} Jan 27 21:09:30 crc kubenswrapper[4858]: I0127 21:09:30.447359 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z7wwn" podStartSLOduration=4.004056115 podStartE2EDuration="7.447333566s" podCreationTimestamp="2026-01-27 21:09:23 +0000 UTC" firstStartedPulling="2026-01-27 21:09:26.366561109 +0000 UTC m=+3711.074376815" lastFinishedPulling="2026-01-27 21:09:29.80983856 +0000 UTC m=+3714.517654266" observedRunningTime="2026-01-27 21:09:30.437516398 +0000 UTC m=+3715.145332124" watchObservedRunningTime="2026-01-27 21:09:30.447333566 +0000 UTC m=+3715.155149302" Jan 27 21:09:33 crc kubenswrapper[4858]: I0127 21:09:33.876837 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z7wwn" Jan 27 21:09:33 crc kubenswrapper[4858]: I0127 21:09:33.877396 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z7wwn" Jan 27 21:09:33 crc kubenswrapper[4858]: I0127 21:09:33.935787 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z7wwn" Jan 27 21:09:43 crc kubenswrapper[4858]: I0127 21:09:43.942078 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z7wwn" Jan 27 21:09:44 crc kubenswrapper[4858]: I0127 21:09:44.071456 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:09:44 crc kubenswrapper[4858]: E0127 21:09:44.071795 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:09:46 crc kubenswrapper[4858]: I0127 21:09:46.656841 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z7wwn"] Jan 27 21:09:46 crc kubenswrapper[4858]: I0127 21:09:46.657827 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z7wwn" podUID="a8ea91d1-7cb8-41c5-9dd2-0d28b4219682" containerName="registry-server" containerID="cri-o://7c081da41b1aab9b19827c2e5a993f54290134929f4ac1a90dcb16462b6326f0" gracePeriod=2 Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.177077 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z7wwn" Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.229643 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-utilities\") pod \"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682\" (UID: \"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682\") " Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.230218 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-catalog-content\") pod \"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682\" (UID: \"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682\") " Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.230321 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrzvd\" (UniqueName: \"kubernetes.io/projected/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-kube-api-access-xrzvd\") pod \"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682\" (UID: \"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682\") " Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.230889 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-utilities" (OuterVolumeSpecName: "utilities") pod "a8ea91d1-7cb8-41c5-9dd2-0d28b4219682" (UID: "a8ea91d1-7cb8-41c5-9dd2-0d28b4219682"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.238833 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-kube-api-access-xrzvd" (OuterVolumeSpecName: "kube-api-access-xrzvd") pod "a8ea91d1-7cb8-41c5-9dd2-0d28b4219682" (UID: "a8ea91d1-7cb8-41c5-9dd2-0d28b4219682"). InnerVolumeSpecName "kube-api-access-xrzvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.286132 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8ea91d1-7cb8-41c5-9dd2-0d28b4219682" (UID: "a8ea91d1-7cb8-41c5-9dd2-0d28b4219682"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.333074 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.333134 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.333161 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrzvd\" (UniqueName: \"kubernetes.io/projected/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682-kube-api-access-xrzvd\") on node \"crc\" DevicePath \"\"" Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.610630 4858 generic.go:334] "Generic (PLEG): container finished" podID="a8ea91d1-7cb8-41c5-9dd2-0d28b4219682" containerID="7c081da41b1aab9b19827c2e5a993f54290134929f4ac1a90dcb16462b6326f0" exitCode=0 Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.610779 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7wwn" event={"ID":"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682","Type":"ContainerDied","Data":"7c081da41b1aab9b19827c2e5a993f54290134929f4ac1a90dcb16462b6326f0"} Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.610837 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7wwn" event={"ID":"a8ea91d1-7cb8-41c5-9dd2-0d28b4219682","Type":"ContainerDied","Data":"4580505cf0d9d734fc4a35a2c64db2a00beeaf7bb6039de110faa93a798f986d"} Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.610878 4858 scope.go:117] "RemoveContainer" containerID="7c081da41b1aab9b19827c2e5a993f54290134929f4ac1a90dcb16462b6326f0" Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.611201 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z7wwn" Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.635673 4858 scope.go:117] "RemoveContainer" containerID="84de919f1ff277c33fb273fddb81a3c305771df55c9e390f65170001885c15e1" Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.666840 4858 scope.go:117] "RemoveContainer" containerID="cc9304fa4f886d998de33b920fd8470017236f97a6564db4012ceef6fb9f0340" Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.670383 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z7wwn"] Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.683507 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z7wwn"] Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.723566 4858 scope.go:117] "RemoveContainer" containerID="7c081da41b1aab9b19827c2e5a993f54290134929f4ac1a90dcb16462b6326f0" Jan 27 21:09:47 crc kubenswrapper[4858]: E0127 21:09:47.724073 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c081da41b1aab9b19827c2e5a993f54290134929f4ac1a90dcb16462b6326f0\": container with ID starting with 7c081da41b1aab9b19827c2e5a993f54290134929f4ac1a90dcb16462b6326f0 not found: ID does not exist" containerID="7c081da41b1aab9b19827c2e5a993f54290134929f4ac1a90dcb16462b6326f0" Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.724161 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c081da41b1aab9b19827c2e5a993f54290134929f4ac1a90dcb16462b6326f0"} err="failed to get container status \"7c081da41b1aab9b19827c2e5a993f54290134929f4ac1a90dcb16462b6326f0\": rpc error: code = NotFound desc = could not find container \"7c081da41b1aab9b19827c2e5a993f54290134929f4ac1a90dcb16462b6326f0\": container with ID starting with 7c081da41b1aab9b19827c2e5a993f54290134929f4ac1a90dcb16462b6326f0 not found: ID does not exist" Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.724240 4858 scope.go:117] "RemoveContainer" containerID="84de919f1ff277c33fb273fddb81a3c305771df55c9e390f65170001885c15e1" Jan 27 21:09:47 crc kubenswrapper[4858]: E0127 21:09:47.724750 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84de919f1ff277c33fb273fddb81a3c305771df55c9e390f65170001885c15e1\": container with ID starting with 84de919f1ff277c33fb273fddb81a3c305771df55c9e390f65170001885c15e1 not found: ID does not exist" containerID="84de919f1ff277c33fb273fddb81a3c305771df55c9e390f65170001885c15e1" Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.724801 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84de919f1ff277c33fb273fddb81a3c305771df55c9e390f65170001885c15e1"} err="failed to get container status \"84de919f1ff277c33fb273fddb81a3c305771df55c9e390f65170001885c15e1\": rpc error: code = NotFound desc = could not find container \"84de919f1ff277c33fb273fddb81a3c305771df55c9e390f65170001885c15e1\": container with ID starting with 84de919f1ff277c33fb273fddb81a3c305771df55c9e390f65170001885c15e1 not found: ID does not exist" Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.724839 4858 scope.go:117] "RemoveContainer" containerID="cc9304fa4f886d998de33b920fd8470017236f97a6564db4012ceef6fb9f0340" Jan 27 21:09:47 crc kubenswrapper[4858]: E0127 21:09:47.725169 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cc9304fa4f886d998de33b920fd8470017236f97a6564db4012ceef6fb9f0340\": container with ID starting with cc9304fa4f886d998de33b920fd8470017236f97a6564db4012ceef6fb9f0340 not found: ID does not exist" containerID="cc9304fa4f886d998de33b920fd8470017236f97a6564db4012ceef6fb9f0340" Jan 27 21:09:47 crc kubenswrapper[4858]: I0127 21:09:47.725222 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc9304fa4f886d998de33b920fd8470017236f97a6564db4012ceef6fb9f0340"} err="failed to get container status \"cc9304fa4f886d998de33b920fd8470017236f97a6564db4012ceef6fb9f0340\": rpc error: code = NotFound desc = could not find container \"cc9304fa4f886d998de33b920fd8470017236f97a6564db4012ceef6fb9f0340\": container with ID starting with cc9304fa4f886d998de33b920fd8470017236f97a6564db4012ceef6fb9f0340 not found: ID does not exist" Jan 27 21:09:48 crc kubenswrapper[4858]: I0127 21:09:48.084573 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8ea91d1-7cb8-41c5-9dd2-0d28b4219682" path="/var/lib/kubelet/pods/a8ea91d1-7cb8-41c5-9dd2-0d28b4219682/volumes" Jan 27 21:09:55 crc kubenswrapper[4858]: I0127 21:09:55.072148 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:09:55 crc kubenswrapper[4858]: E0127 21:09:55.073387 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:10:06 crc kubenswrapper[4858]: I0127 21:10:06.108495 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:10:06 crc kubenswrapper[4858]: E0127 21:10:06.109661 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:10:20 crc kubenswrapper[4858]: I0127 21:10:20.071914 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:10:20 crc kubenswrapper[4858]: E0127 21:10:20.072674 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:10:32 crc kubenswrapper[4858]: I0127 21:10:32.070653 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:10:32 crc kubenswrapper[4858]: E0127 21:10:32.071444 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:10:45 crc kubenswrapper[4858]: I0127 21:10:45.072441 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:10:45 crc kubenswrapper[4858]: E0127 21:10:45.074531 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:11:00 crc kubenswrapper[4858]: I0127 21:11:00.071298 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:11:00 crc kubenswrapper[4858]: E0127 21:11:00.072297 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:11:13 crc kubenswrapper[4858]: I0127 21:11:13.072229 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:11:13 crc kubenswrapper[4858]: E0127 21:11:13.073773 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:11:24 crc kubenswrapper[4858]: I0127 21:11:24.072514 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:11:24 crc kubenswrapper[4858]: E0127 21:11:24.076329 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:11:39 crc kubenswrapper[4858]: I0127 21:11:39.071789 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:11:39 crc kubenswrapper[4858]: E0127 21:11:39.072984 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:11:54 crc kubenswrapper[4858]: I0127 21:11:54.073364 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:11:54 crc kubenswrapper[4858]: E0127 21:11:54.074165 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:12:09 crc kubenswrapper[4858]: I0127 21:12:09.072294 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:12:09 crc kubenswrapper[4858]: E0127 21:12:09.073974 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:12:23 crc kubenswrapper[4858]: I0127 21:12:23.071125 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:12:23 crc kubenswrapper[4858]: E0127 21:12:23.071797 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:12:38 crc kubenswrapper[4858]: I0127 21:12:38.071943 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:12:38 crc kubenswrapper[4858]: E0127 21:12:38.073188 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:12:51 crc kubenswrapper[4858]: I0127 21:12:51.072474 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:12:51 crc kubenswrapper[4858]: E0127 21:12:51.073347 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" 
podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:12:56 crc kubenswrapper[4858]: I0127 21:12:56.219021 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6h9qp"] Jan 27 21:12:56 crc kubenswrapper[4858]: E0127 21:12:56.226228 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8ea91d1-7cb8-41c5-9dd2-0d28b4219682" containerName="extract-utilities" Jan 27 21:12:56 crc kubenswrapper[4858]: I0127 21:12:56.226366 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8ea91d1-7cb8-41c5-9dd2-0d28b4219682" containerName="extract-utilities" Jan 27 21:12:56 crc kubenswrapper[4858]: E0127 21:12:56.226490 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8ea91d1-7cb8-41c5-9dd2-0d28b4219682" containerName="extract-content" Jan 27 21:12:56 crc kubenswrapper[4858]: I0127 21:12:56.226603 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8ea91d1-7cb8-41c5-9dd2-0d28b4219682" containerName="extract-content" Jan 27 21:12:56 crc kubenswrapper[4858]: E0127 21:12:56.226720 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8ea91d1-7cb8-41c5-9dd2-0d28b4219682" containerName="registry-server" Jan 27 21:12:56 crc kubenswrapper[4858]: I0127 21:12:56.226804 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8ea91d1-7cb8-41c5-9dd2-0d28b4219682" containerName="registry-server" Jan 27 21:12:56 crc kubenswrapper[4858]: I0127 21:12:56.227264 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8ea91d1-7cb8-41c5-9dd2-0d28b4219682" containerName="registry-server" Jan 27 21:12:56 crc kubenswrapper[4858]: I0127 21:12:56.229824 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6h9qp" Jan 27 21:12:56 crc kubenswrapper[4858]: I0127 21:12:56.246903 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6h9qp"] Jan 27 21:12:56 crc kubenswrapper[4858]: I0127 21:12:56.326323 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-utilities\") pod \"community-operators-6h9qp\" (UID: \"edbbb70d-b8f9-494f-a89d-5a5e91a9f112\") " pod="openshift-marketplace/community-operators-6h9qp" Jan 27 21:12:56 crc kubenswrapper[4858]: I0127 21:12:56.326410 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2kvj\" (UniqueName: \"kubernetes.io/projected/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-kube-api-access-d2kvj\") pod \"community-operators-6h9qp\" (UID: \"edbbb70d-b8f9-494f-a89d-5a5e91a9f112\") " pod="openshift-marketplace/community-operators-6h9qp" Jan 27 21:12:56 crc kubenswrapper[4858]: I0127 21:12:56.326756 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-catalog-content\") pod \"community-operators-6h9qp\" (UID: \"edbbb70d-b8f9-494f-a89d-5a5e91a9f112\") " pod="openshift-marketplace/community-operators-6h9qp" Jan 27 21:12:56 crc kubenswrapper[4858]: I0127 21:12:56.428629 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-utilities\") pod \"community-operators-6h9qp\" (UID: 
\"edbbb70d-b8f9-494f-a89d-5a5e91a9f112\") " pod="openshift-marketplace/community-operators-6h9qp" Jan 27 21:12:56 crc kubenswrapper[4858]: I0127 21:12:56.428749 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2kvj\" (UniqueName: \"kubernetes.io/projected/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-kube-api-access-d2kvj\") pod \"community-operators-6h9qp\" (UID: \"edbbb70d-b8f9-494f-a89d-5a5e91a9f112\") " pod="openshift-marketplace/community-operators-6h9qp" Jan 27 21:12:56 crc kubenswrapper[4858]: I0127 21:12:56.428817 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-catalog-content\") pod \"community-operators-6h9qp\" (UID: \"edbbb70d-b8f9-494f-a89d-5a5e91a9f112\") " pod="openshift-marketplace/community-operators-6h9qp" Jan 27 21:12:56 crc kubenswrapper[4858]: I0127 21:12:56.429272 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-utilities\") pod \"community-operators-6h9qp\" (UID: \"edbbb70d-b8f9-494f-a89d-5a5e91a9f112\") " pod="openshift-marketplace/community-operators-6h9qp" Jan 27 21:12:56 crc kubenswrapper[4858]: I0127 21:12:56.429337 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-catalog-content\") pod \"community-operators-6h9qp\" (UID: \"edbbb70d-b8f9-494f-a89d-5a5e91a9f112\") " pod="openshift-marketplace/community-operators-6h9qp" Jan 27 21:12:56 crc kubenswrapper[4858]: I0127 21:12:56.452443 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2kvj\" (UniqueName: \"kubernetes.io/projected/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-kube-api-access-d2kvj\") pod \"community-operators-6h9qp\" (UID: \"edbbb70d-b8f9-494f-a89d-5a5e91a9f112\") " pod="openshift-marketplace/community-operators-6h9qp" Jan 27 21:12:56 crc kubenswrapper[4858]: I0127 21:12:56.588983 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6h9qp" Jan 27 21:12:57 crc kubenswrapper[4858]: I0127 21:12:57.280022 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6h9qp"] Jan 27 21:12:57 crc kubenswrapper[4858]: I0127 21:12:57.617787 4858 generic.go:334] "Generic (PLEG): container finished" podID="edbbb70d-b8f9-494f-a89d-5a5e91a9f112" containerID="5735aa12ba830e31bb3afce12869ba9783055fc3dc9587af5253223823c32902" exitCode=0 Jan 27 21:12:57 crc kubenswrapper[4858]: I0127 21:12:57.617845 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6h9qp" event={"ID":"edbbb70d-b8f9-494f-a89d-5a5e91a9f112","Type":"ContainerDied","Data":"5735aa12ba830e31bb3afce12869ba9783055fc3dc9587af5253223823c32902"} Jan 27 21:12:57 crc kubenswrapper[4858]: I0127 21:12:57.617881 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6h9qp" event={"ID":"edbbb70d-b8f9-494f-a89d-5a5e91a9f112","Type":"ContainerStarted","Data":"4f794d2e0c889badaa13332083d368b8ca54d0776dade3401d1e511936853e06"} Jan 27 21:12:58 crc kubenswrapper[4858]: I0127 21:12:58.624272 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-trnl8"] Jan 27 21:12:58 crc kubenswrapper[4858]: I0127 21:12:58.630839 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trnl8" Jan 27 21:12:58 crc kubenswrapper[4858]: I0127 21:12:58.643282 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-trnl8"] Jan 27 21:12:58 crc kubenswrapper[4858]: I0127 21:12:58.711692 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33833b9-cab2-4b3e-995c-0aebf87012ea-catalog-content\") pod \"redhat-marketplace-trnl8\" (UID: \"c33833b9-cab2-4b3e-995c-0aebf87012ea\") " pod="openshift-marketplace/redhat-marketplace-trnl8" Jan 27 21:12:58 crc kubenswrapper[4858]: I0127 21:12:58.712004 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz8fw\" (UniqueName: \"kubernetes.io/projected/c33833b9-cab2-4b3e-995c-0aebf87012ea-kube-api-access-tz8fw\") pod \"redhat-marketplace-trnl8\" (UID: \"c33833b9-cab2-4b3e-995c-0aebf87012ea\") " pod="openshift-marketplace/redhat-marketplace-trnl8" Jan 27 21:12:58 crc kubenswrapper[4858]: I0127 21:12:58.712368 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33833b9-cab2-4b3e-995c-0aebf87012ea-utilities\") pod \"redhat-marketplace-trnl8\" (UID: \"c33833b9-cab2-4b3e-995c-0aebf87012ea\") " pod="openshift-marketplace/redhat-marketplace-trnl8" Jan 27 21:12:58 crc kubenswrapper[4858]: I0127 21:12:58.816044 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33833b9-cab2-4b3e-995c-0aebf87012ea-catalog-content\") pod \"redhat-marketplace-trnl8\" (UID: \"c33833b9-cab2-4b3e-995c-0aebf87012ea\") " pod="openshift-marketplace/redhat-marketplace-trnl8" Jan 27 21:12:58 crc kubenswrapper[4858]: I0127 21:12:58.816151 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz8fw\" (UniqueName: 
\"kubernetes.io/projected/c33833b9-cab2-4b3e-995c-0aebf87012ea-kube-api-access-tz8fw\") pod \"redhat-marketplace-trnl8\" (UID: \"c33833b9-cab2-4b3e-995c-0aebf87012ea\") " pod="openshift-marketplace/redhat-marketplace-trnl8" Jan 27 21:12:58 crc kubenswrapper[4858]: I0127 21:12:58.816210 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33833b9-cab2-4b3e-995c-0aebf87012ea-utilities\") pod \"redhat-marketplace-trnl8\" (UID: \"c33833b9-cab2-4b3e-995c-0aebf87012ea\") " pod="openshift-marketplace/redhat-marketplace-trnl8" Jan 27 21:12:58 crc kubenswrapper[4858]: I0127 21:12:58.817258 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33833b9-cab2-4b3e-995c-0aebf87012ea-utilities\") pod \"redhat-marketplace-trnl8\" (UID: \"c33833b9-cab2-4b3e-995c-0aebf87012ea\") " pod="openshift-marketplace/redhat-marketplace-trnl8" Jan 27 21:12:58 crc kubenswrapper[4858]: I0127 21:12:58.817593 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33833b9-cab2-4b3e-995c-0aebf87012ea-catalog-content\") pod \"redhat-marketplace-trnl8\" (UID: \"c33833b9-cab2-4b3e-995c-0aebf87012ea\") " pod="openshift-marketplace/redhat-marketplace-trnl8" Jan 27 21:12:58 crc kubenswrapper[4858]: I0127 21:12:58.858130 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz8fw\" (UniqueName: \"kubernetes.io/projected/c33833b9-cab2-4b3e-995c-0aebf87012ea-kube-api-access-tz8fw\") pod \"redhat-marketplace-trnl8\" (UID: \"c33833b9-cab2-4b3e-995c-0aebf87012ea\") " pod="openshift-marketplace/redhat-marketplace-trnl8" Jan 27 21:12:58 crc kubenswrapper[4858]: I0127 21:12:58.959928 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trnl8" Jan 27 21:12:59 crc kubenswrapper[4858]: I0127 21:12:59.238131 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8sl4v"] Jan 27 21:12:59 crc kubenswrapper[4858]: I0127 21:12:59.241437 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8sl4v" Jan 27 21:12:59 crc kubenswrapper[4858]: I0127 21:12:59.279822 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8sl4v"] Jan 27 21:12:59 crc kubenswrapper[4858]: I0127 21:12:59.331946 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262e3465-bb82-42e3-a84c-146a5101052b-catalog-content\") pod \"redhat-operators-8sl4v\" (UID: \"262e3465-bb82-42e3-a84c-146a5101052b\") " pod="openshift-marketplace/redhat-operators-8sl4v" Jan 27 21:12:59 crc kubenswrapper[4858]: I0127 21:12:59.332013 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm2l8\" (UniqueName: \"kubernetes.io/projected/262e3465-bb82-42e3-a84c-146a5101052b-kube-api-access-mm2l8\") pod \"redhat-operators-8sl4v\" (UID: \"262e3465-bb82-42e3-a84c-146a5101052b\") " pod="openshift-marketplace/redhat-operators-8sl4v" Jan 27 21:12:59 crc kubenswrapper[4858]: I0127 21:12:59.332098 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262e3465-bb82-42e3-a84c-146a5101052b-utilities\") pod \"redhat-operators-8sl4v\" (UID: \"262e3465-bb82-42e3-a84c-146a5101052b\") " pod="openshift-marketplace/redhat-operators-8sl4v" Jan 27 21:12:59 crc kubenswrapper[4858]: I0127 21:12:59.433962 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262e3465-bb82-42e3-a84c-146a5101052b-catalog-content\") pod \"redhat-operators-8sl4v\" (UID: \"262e3465-bb82-42e3-a84c-146a5101052b\") " pod="openshift-marketplace/redhat-operators-8sl4v" Jan 27 21:12:59 crc kubenswrapper[4858]: I0127 21:12:59.434012 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm2l8\" (UniqueName: \"kubernetes.io/projected/262e3465-bb82-42e3-a84c-146a5101052b-kube-api-access-mm2l8\") pod \"redhat-operators-8sl4v\" (UID: \"262e3465-bb82-42e3-a84c-146a5101052b\") " pod="openshift-marketplace/redhat-operators-8sl4v" Jan 27 21:12:59 crc kubenswrapper[4858]: I0127 21:12:59.434085 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262e3465-bb82-42e3-a84c-146a5101052b-utilities\") pod \"redhat-operators-8sl4v\" (UID: \"262e3465-bb82-42e3-a84c-146a5101052b\") " pod="openshift-marketplace/redhat-operators-8sl4v" Jan 27 21:12:59 crc kubenswrapper[4858]: I0127 21:12:59.434635 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262e3465-bb82-42e3-a84c-146a5101052b-utilities\") pod \"redhat-operators-8sl4v\" (UID: \"262e3465-bb82-42e3-a84c-146a5101052b\") " pod="openshift-marketplace/redhat-operators-8sl4v" Jan 27 21:12:59 crc kubenswrapper[4858]: I0127 21:12:59.434848 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262e3465-bb82-42e3-a84c-146a5101052b-catalog-content\") pod \"redhat-operators-8sl4v\" (UID: \"262e3465-bb82-42e3-a84c-146a5101052b\") " pod="openshift-marketplace/redhat-operators-8sl4v" Jan 27 21:12:59 crc kubenswrapper[4858]: I0127 21:12:59.473230 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mm2l8\" (UniqueName: \"kubernetes.io/projected/262e3465-bb82-42e3-a84c-146a5101052b-kube-api-access-mm2l8\") pod \"redhat-operators-8sl4v\" (UID: \"262e3465-bb82-42e3-a84c-146a5101052b\") " pod="openshift-marketplace/redhat-operators-8sl4v" Jan 27 21:12:59 crc kubenswrapper[4858]: I0127 21:12:59.573065 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8sl4v" Jan 27 21:12:59 crc kubenswrapper[4858]: I0127 21:12:59.654327 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6h9qp" event={"ID":"edbbb70d-b8f9-494f-a89d-5a5e91a9f112","Type":"ContainerStarted","Data":"cb8e72d7f1ae2a5877bbb6dc222e1e1238eb9d99a9bad529fb35f88df4523efd"} Jan 27 21:12:59 crc kubenswrapper[4858]: I0127 21:12:59.661921 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-trnl8"] Jan 27 21:12:59 crc kubenswrapper[4858]: W0127 21:12:59.684754 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc33833b9_cab2_4b3e_995c_0aebf87012ea.slice/crio-78b2857425185d4f9c55ca6616b3daf3f712b6df6c1d95745e4dd4df7289fc08 WatchSource:0}: Error finding container 78b2857425185d4f9c55ca6616b3daf3f712b6df6c1d95745e4dd4df7289fc08: Status 404 returned error can't find the container with id 78b2857425185d4f9c55ca6616b3daf3f712b6df6c1d95745e4dd4df7289fc08 Jan 27 21:13:00 crc kubenswrapper[4858]: I0127 21:13:00.175043 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8sl4v"] Jan 27 21:13:00 crc kubenswrapper[4858]: I0127 21:13:00.694353 4858 generic.go:334] "Generic (PLEG): container finished" podID="c33833b9-cab2-4b3e-995c-0aebf87012ea" containerID="43c6985a0db777e49c2a1fd398e0eafcdad5183aaac5f2e9bf73875a0e97cc7a" exitCode=0 Jan 27 21:13:00 crc kubenswrapper[4858]: I0127 21:13:00.694425 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trnl8" event={"ID":"c33833b9-cab2-4b3e-995c-0aebf87012ea","Type":"ContainerDied","Data":"43c6985a0db777e49c2a1fd398e0eafcdad5183aaac5f2e9bf73875a0e97cc7a"} Jan 27 21:13:00 crc kubenswrapper[4858]: I0127 21:13:00.694808 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trnl8" event={"ID":"c33833b9-cab2-4b3e-995c-0aebf87012ea","Type":"ContainerStarted","Data":"78b2857425185d4f9c55ca6616b3daf3f712b6df6c1d95745e4dd4df7289fc08"} Jan 27 21:13:00 crc kubenswrapper[4858]: I0127 21:13:00.705396 4858 generic.go:334] "Generic (PLEG): container finished" podID="262e3465-bb82-42e3-a84c-146a5101052b" containerID="5600296eb03514a199d7569728e80da773d301cd015fe71bc9469dfbae8fed35" exitCode=0 Jan 27 21:13:00 crc kubenswrapper[4858]: I0127 21:13:00.705498 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sl4v" event={"ID":"262e3465-bb82-42e3-a84c-146a5101052b","Type":"ContainerDied","Data":"5600296eb03514a199d7569728e80da773d301cd015fe71bc9469dfbae8fed35"} Jan 27 21:13:00 crc kubenswrapper[4858]: I0127 21:13:00.705536 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sl4v" event={"ID":"262e3465-bb82-42e3-a84c-146a5101052b","Type":"ContainerStarted","Data":"aab3c8bb90b3902be6fc52e169354e5d6473419d2e7956913a64c4a80832380b"} Jan 27 21:13:01 crc kubenswrapper[4858]: I0127 21:13:01.715763 4858 generic.go:334] "Generic (PLEG): 
container finished" podID="edbbb70d-b8f9-494f-a89d-5a5e91a9f112" containerID="cb8e72d7f1ae2a5877bbb6dc222e1e1238eb9d99a9bad529fb35f88df4523efd" exitCode=0 Jan 27 21:13:01 crc kubenswrapper[4858]: I0127 21:13:01.715824 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6h9qp" event={"ID":"edbbb70d-b8f9-494f-a89d-5a5e91a9f112","Type":"ContainerDied","Data":"cb8e72d7f1ae2a5877bbb6dc222e1e1238eb9d99a9bad529fb35f88df4523efd"} Jan 27 21:13:02 crc kubenswrapper[4858]: I0127 21:13:02.729322 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6h9qp" event={"ID":"edbbb70d-b8f9-494f-a89d-5a5e91a9f112","Type":"ContainerStarted","Data":"cc0bca135f12b0809fc855cf3850beb4470d60b6f96e83cda64755c580a97119"} Jan 27 21:13:02 crc kubenswrapper[4858]: I0127 21:13:02.732378 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trnl8" event={"ID":"c33833b9-cab2-4b3e-995c-0aebf87012ea","Type":"ContainerStarted","Data":"dae9b038542e2079bdc3b01ccb90dba360eea53e9c24a4f96189d9c47ff749c8"} Jan 27 21:13:02 crc kubenswrapper[4858]: I0127 21:13:02.735663 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sl4v" event={"ID":"262e3465-bb82-42e3-a84c-146a5101052b","Type":"ContainerStarted","Data":"320bf2b675d808e1ad4a7b322f869d9b698d3e7b1e994092a5346df31fc06e79"} Jan 27 21:13:03 crc kubenswrapper[4858]: I0127 21:13:03.072121 4858 scope.go:117] "RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:13:03 crc kubenswrapper[4858]: I0127 21:13:03.746689 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"6cdfc3b21a124ca7f68a81e07cc50c1fe8c612ea7759b765a444cd690e100b72"} Jan 27 21:13:03 crc kubenswrapper[4858]: I0127 21:13:03.771099 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6h9qp" podStartSLOduration=3.095180854 podStartE2EDuration="7.77107144s" podCreationTimestamp="2026-01-27 21:12:56 +0000 UTC" firstStartedPulling="2026-01-27 21:12:57.620314365 +0000 UTC m=+3922.328130071" lastFinishedPulling="2026-01-27 21:13:02.296204951 +0000 UTC m=+3927.004020657" observedRunningTime="2026-01-27 21:13:03.763908467 +0000 UTC m=+3928.471724183" watchObservedRunningTime="2026-01-27 21:13:03.77107144 +0000 UTC m=+3928.478887146" Jan 27 21:13:05 crc kubenswrapper[4858]: I0127 21:13:05.767941 4858 generic.go:334] "Generic (PLEG): container finished" podID="c33833b9-cab2-4b3e-995c-0aebf87012ea" containerID="dae9b038542e2079bdc3b01ccb90dba360eea53e9c24a4f96189d9c47ff749c8" exitCode=0 Jan 27 21:13:05 crc kubenswrapper[4858]: I0127 21:13:05.768002 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trnl8" event={"ID":"c33833b9-cab2-4b3e-995c-0aebf87012ea","Type":"ContainerDied","Data":"dae9b038542e2079bdc3b01ccb90dba360eea53e9c24a4f96189d9c47ff749c8"} Jan 27 21:13:06 crc kubenswrapper[4858]: I0127 21:13:06.589371 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6h9qp" Jan 27 21:13:06 crc kubenswrapper[4858]: I0127 21:13:06.589998 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6h9qp" Jan 
27 21:13:07 crc kubenswrapper[4858]: I0127 21:13:07.663422 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6h9qp" podUID="edbbb70d-b8f9-494f-a89d-5a5e91a9f112" containerName="registry-server" probeResult="failure" output=< Jan 27 21:13:07 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 21:13:07 crc kubenswrapper[4858]: > Jan 27 21:13:07 crc kubenswrapper[4858]: I0127 21:13:07.792189 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trnl8" event={"ID":"c33833b9-cab2-4b3e-995c-0aebf87012ea","Type":"ContainerStarted","Data":"d99692117c85cd571e06f3778ff5874312f993b50c8473c9dad5b2c176e099f0"} Jan 27 21:13:07 crc kubenswrapper[4858]: I0127 21:13:07.819071 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-trnl8" podStartSLOduration=3.57263459 podStartE2EDuration="9.819047682s" podCreationTimestamp="2026-01-27 21:12:58 +0000 UTC" firstStartedPulling="2026-01-27 21:13:00.699974589 +0000 UTC m=+3925.407790295" lastFinishedPulling="2026-01-27 21:13:06.946387671 +0000 UTC m=+3931.654203387" observedRunningTime="2026-01-27 21:13:07.81614359 +0000 UTC m=+3932.523959326" watchObservedRunningTime="2026-01-27 21:13:07.819047682 +0000 UTC m=+3932.526863388" Jan 27 21:13:08 crc kubenswrapper[4858]: I0127 21:13:08.961206 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-trnl8" Jan 27 21:13:08 crc kubenswrapper[4858]: I0127 21:13:08.961785 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-trnl8" Jan 27 21:13:10 crc kubenswrapper[4858]: I0127 21:13:10.020379 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-trnl8" podUID="c33833b9-cab2-4b3e-995c-0aebf87012ea" containerName="registry-server" probeResult="failure" output=< Jan 27 21:13:10 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 21:13:10 crc kubenswrapper[4858]: > Jan 27 21:13:11 crc kubenswrapper[4858]: I0127 21:13:11.830362 4858 generic.go:334] "Generic (PLEG): container finished" podID="262e3465-bb82-42e3-a84c-146a5101052b" containerID="320bf2b675d808e1ad4a7b322f869d9b698d3e7b1e994092a5346df31fc06e79" exitCode=0 Jan 27 21:13:11 crc kubenswrapper[4858]: I0127 21:13:11.830451 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sl4v" event={"ID":"262e3465-bb82-42e3-a84c-146a5101052b","Type":"ContainerDied","Data":"320bf2b675d808e1ad4a7b322f869d9b698d3e7b1e994092a5346df31fc06e79"} Jan 27 21:13:12 crc kubenswrapper[4858]: I0127 21:13:12.844648 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sl4v" event={"ID":"262e3465-bb82-42e3-a84c-146a5101052b","Type":"ContainerStarted","Data":"d09d57805c8537cf8638f1034d6329380a53b45cdde9d580917fd7c0f0fcb82e"} Jan 27 21:13:12 crc kubenswrapper[4858]: I0127 21:13:12.876959 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8sl4v" podStartSLOduration=2.386003928 podStartE2EDuration="13.876929536s" podCreationTimestamp="2026-01-27 21:12:59 +0000 UTC" firstStartedPulling="2026-01-27 21:13:00.709323473 +0000 UTC m=+3925.417139179" lastFinishedPulling="2026-01-27 21:13:12.200249081 +0000 UTC m=+3936.908064787" 
observedRunningTime="2026-01-27 21:13:12.865873803 +0000 UTC m=+3937.573689539" watchObservedRunningTime="2026-01-27 21:13:12.876929536 +0000 UTC m=+3937.584745242" Jan 27 21:13:17 crc kubenswrapper[4858]: I0127 21:13:17.640984 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6h9qp" podUID="edbbb70d-b8f9-494f-a89d-5a5e91a9f112" containerName="registry-server" probeResult="failure" output=< Jan 27 21:13:17 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 21:13:17 crc kubenswrapper[4858]: > Jan 27 21:13:19 crc kubenswrapper[4858]: I0127 21:13:19.019512 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-trnl8" Jan 27 21:13:19 crc kubenswrapper[4858]: I0127 21:13:19.088952 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-trnl8" Jan 27 21:13:19 crc kubenswrapper[4858]: I0127 21:13:19.262041 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-trnl8"] Jan 27 21:13:19 crc kubenswrapper[4858]: I0127 21:13:19.573695 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8sl4v" Jan 27 21:13:19 crc kubenswrapper[4858]: I0127 21:13:19.573745 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8sl4v" Jan 27 21:13:20 crc kubenswrapper[4858]: I0127 21:13:20.627071 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8sl4v" podUID="262e3465-bb82-42e3-a84c-146a5101052b" containerName="registry-server" probeResult="failure" output=< Jan 27 21:13:20 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 21:13:20 crc kubenswrapper[4858]: > Jan 27 21:13:20 crc kubenswrapper[4858]: I0127 21:13:20.939637 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-trnl8" podUID="c33833b9-cab2-4b3e-995c-0aebf87012ea" containerName="registry-server" containerID="cri-o://d99692117c85cd571e06f3778ff5874312f993b50c8473c9dad5b2c176e099f0" gracePeriod=2 Jan 27 21:13:21 crc kubenswrapper[4858]: I0127 21:13:21.572808 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trnl8" Jan 27 21:13:21 crc kubenswrapper[4858]: I0127 21:13:21.646008 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33833b9-cab2-4b3e-995c-0aebf87012ea-utilities\") pod \"c33833b9-cab2-4b3e-995c-0aebf87012ea\" (UID: \"c33833b9-cab2-4b3e-995c-0aebf87012ea\") " Jan 27 21:13:21 crc kubenswrapper[4858]: I0127 21:13:21.646203 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz8fw\" (UniqueName: \"kubernetes.io/projected/c33833b9-cab2-4b3e-995c-0aebf87012ea-kube-api-access-tz8fw\") pod \"c33833b9-cab2-4b3e-995c-0aebf87012ea\" (UID: \"c33833b9-cab2-4b3e-995c-0aebf87012ea\") " Jan 27 21:13:21 crc kubenswrapper[4858]: I0127 21:13:21.646266 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33833b9-cab2-4b3e-995c-0aebf87012ea-catalog-content\") pod \"c33833b9-cab2-4b3e-995c-0aebf87012ea\" (UID: \"c33833b9-cab2-4b3e-995c-0aebf87012ea\") " Jan 27 21:13:21 crc kubenswrapper[4858]: I0127 21:13:21.646783 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c33833b9-cab2-4b3e-995c-0aebf87012ea-utilities" (OuterVolumeSpecName: "utilities") pod "c33833b9-cab2-4b3e-995c-0aebf87012ea" (UID: "c33833b9-cab2-4b3e-995c-0aebf87012ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:13:21 crc kubenswrapper[4858]: I0127 21:13:21.647191 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33833b9-cab2-4b3e-995c-0aebf87012ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:13:21 crc kubenswrapper[4858]: I0127 21:13:21.658812 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33833b9-cab2-4b3e-995c-0aebf87012ea-kube-api-access-tz8fw" (OuterVolumeSpecName: "kube-api-access-tz8fw") pod "c33833b9-cab2-4b3e-995c-0aebf87012ea" (UID: "c33833b9-cab2-4b3e-995c-0aebf87012ea"). InnerVolumeSpecName "kube-api-access-tz8fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:13:21 crc kubenswrapper[4858]: I0127 21:13:21.676376 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c33833b9-cab2-4b3e-995c-0aebf87012ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c33833b9-cab2-4b3e-995c-0aebf87012ea" (UID: "c33833b9-cab2-4b3e-995c-0aebf87012ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:13:21 crc kubenswrapper[4858]: I0127 21:13:21.749130 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz8fw\" (UniqueName: \"kubernetes.io/projected/c33833b9-cab2-4b3e-995c-0aebf87012ea-kube-api-access-tz8fw\") on node \"crc\" DevicePath \"\"" Jan 27 21:13:21 crc kubenswrapper[4858]: I0127 21:13:21.749168 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33833b9-cab2-4b3e-995c-0aebf87012ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:13:21 crc kubenswrapper[4858]: I0127 21:13:21.950458 4858 generic.go:334] "Generic (PLEG): container finished" podID="c33833b9-cab2-4b3e-995c-0aebf87012ea" containerID="d99692117c85cd571e06f3778ff5874312f993b50c8473c9dad5b2c176e099f0" exitCode=0 Jan 27 21:13:21 crc kubenswrapper[4858]: I0127 21:13:21.950494 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trnl8" event={"ID":"c33833b9-cab2-4b3e-995c-0aebf87012ea","Type":"ContainerDied","Data":"d99692117c85cd571e06f3778ff5874312f993b50c8473c9dad5b2c176e099f0"} Jan 27 21:13:21 crc kubenswrapper[4858]: I0127 21:13:21.950520 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-trnl8" Jan 27 21:13:21 crc kubenswrapper[4858]: I0127 21:13:21.950527 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-trnl8" event={"ID":"c33833b9-cab2-4b3e-995c-0aebf87012ea","Type":"ContainerDied","Data":"78b2857425185d4f9c55ca6616b3daf3f712b6df6c1d95745e4dd4df7289fc08"} Jan 27 21:13:21 crc kubenswrapper[4858]: I0127 21:13:21.950538 4858 scope.go:117] "RemoveContainer" containerID="d99692117c85cd571e06f3778ff5874312f993b50c8473c9dad5b2c176e099f0" Jan 27 21:13:21 crc kubenswrapper[4858]: I0127 21:13:21.972074 4858 scope.go:117] "RemoveContainer" containerID="dae9b038542e2079bdc3b01ccb90dba360eea53e9c24a4f96189d9c47ff749c8" Jan 27 21:13:21 crc kubenswrapper[4858]: I0127 21:13:21.993773 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-trnl8"] Jan 27 21:13:22 crc kubenswrapper[4858]: I0127 21:13:22.007006 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-trnl8"] Jan 27 21:13:22 crc kubenswrapper[4858]: I0127 21:13:22.013817 4858 scope.go:117] "RemoveContainer" containerID="43c6985a0db777e49c2a1fd398e0eafcdad5183aaac5f2e9bf73875a0e97cc7a" Jan 27 21:13:22 crc kubenswrapper[4858]: I0127 21:13:22.050028 4858 scope.go:117] "RemoveContainer" containerID="d99692117c85cd571e06f3778ff5874312f993b50c8473c9dad5b2c176e099f0" Jan 27 21:13:22 crc kubenswrapper[4858]: E0127 21:13:22.050714 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d99692117c85cd571e06f3778ff5874312f993b50c8473c9dad5b2c176e099f0\": container with ID starting with d99692117c85cd571e06f3778ff5874312f993b50c8473c9dad5b2c176e099f0 not found: ID does not exist" containerID="d99692117c85cd571e06f3778ff5874312f993b50c8473c9dad5b2c176e099f0" Jan 27 21:13:22 crc kubenswrapper[4858]: I0127 21:13:22.050754 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99692117c85cd571e06f3778ff5874312f993b50c8473c9dad5b2c176e099f0"} err="failed to get container status 
\"d99692117c85cd571e06f3778ff5874312f993b50c8473c9dad5b2c176e099f0\": rpc error: code = NotFound desc = could not find container \"d99692117c85cd571e06f3778ff5874312f993b50c8473c9dad5b2c176e099f0\": container with ID starting with d99692117c85cd571e06f3778ff5874312f993b50c8473c9dad5b2c176e099f0 not found: ID does not exist" Jan 27 21:13:22 crc kubenswrapper[4858]: I0127 21:13:22.050783 4858 scope.go:117] "RemoveContainer" containerID="dae9b038542e2079bdc3b01ccb90dba360eea53e9c24a4f96189d9c47ff749c8" Jan 27 21:13:22 crc kubenswrapper[4858]: E0127 21:13:22.051273 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae9b038542e2079bdc3b01ccb90dba360eea53e9c24a4f96189d9c47ff749c8\": container with ID starting with dae9b038542e2079bdc3b01ccb90dba360eea53e9c24a4f96189d9c47ff749c8 not found: ID does not exist" containerID="dae9b038542e2079bdc3b01ccb90dba360eea53e9c24a4f96189d9c47ff749c8" Jan 27 21:13:22 crc kubenswrapper[4858]: I0127 21:13:22.051305 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae9b038542e2079bdc3b01ccb90dba360eea53e9c24a4f96189d9c47ff749c8"} err="failed to get container status \"dae9b038542e2079bdc3b01ccb90dba360eea53e9c24a4f96189d9c47ff749c8\": rpc error: code = NotFound desc = could not find container \"dae9b038542e2079bdc3b01ccb90dba360eea53e9c24a4f96189d9c47ff749c8\": container with ID starting with dae9b038542e2079bdc3b01ccb90dba360eea53e9c24a4f96189d9c47ff749c8 not found: ID does not exist" Jan 27 21:13:22 crc kubenswrapper[4858]: I0127 21:13:22.051327 4858 scope.go:117] "RemoveContainer" containerID="43c6985a0db777e49c2a1fd398e0eafcdad5183aaac5f2e9bf73875a0e97cc7a" Jan 27 21:13:22 crc kubenswrapper[4858]: E0127 21:13:22.051888 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43c6985a0db777e49c2a1fd398e0eafcdad5183aaac5f2e9bf73875a0e97cc7a\": container with ID starting with 43c6985a0db777e49c2a1fd398e0eafcdad5183aaac5f2e9bf73875a0e97cc7a not found: ID does not exist" containerID="43c6985a0db777e49c2a1fd398e0eafcdad5183aaac5f2e9bf73875a0e97cc7a" Jan 27 21:13:22 crc kubenswrapper[4858]: I0127 21:13:22.051919 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c6985a0db777e49c2a1fd398e0eafcdad5183aaac5f2e9bf73875a0e97cc7a"} err="failed to get container status \"43c6985a0db777e49c2a1fd398e0eafcdad5183aaac5f2e9bf73875a0e97cc7a\": rpc error: code = NotFound desc = could not find container \"43c6985a0db777e49c2a1fd398e0eafcdad5183aaac5f2e9bf73875a0e97cc7a\": container with ID starting with 43c6985a0db777e49c2a1fd398e0eafcdad5183aaac5f2e9bf73875a0e97cc7a not found: ID does not exist" Jan 27 21:13:22 crc kubenswrapper[4858]: I0127 21:13:22.087845 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c33833b9-cab2-4b3e-995c-0aebf87012ea" path="/var/lib/kubelet/pods/c33833b9-cab2-4b3e-995c-0aebf87012ea/volumes" Jan 27 21:13:26 crc kubenswrapper[4858]: I0127 21:13:26.663504 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6h9qp" Jan 27 21:13:26 crc kubenswrapper[4858]: I0127 21:13:26.722160 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6h9qp" Jan 27 21:13:27 crc kubenswrapper[4858]: I0127 21:13:27.420914 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-6h9qp"] Jan 27 21:13:28 crc kubenswrapper[4858]: I0127 21:13:28.011328 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6h9qp" podUID="edbbb70d-b8f9-494f-a89d-5a5e91a9f112" containerName="registry-server" containerID="cri-o://cc0bca135f12b0809fc855cf3850beb4470d60b6f96e83cda64755c580a97119" gracePeriod=2 Jan 27 21:13:28 crc kubenswrapper[4858]: I0127 21:13:28.512100 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6h9qp" Jan 27 21:13:28 crc kubenswrapper[4858]: I0127 21:13:28.647140 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-catalog-content\") pod \"edbbb70d-b8f9-494f-a89d-5a5e91a9f112\" (UID: \"edbbb70d-b8f9-494f-a89d-5a5e91a9f112\") " Jan 27 21:13:28 crc kubenswrapper[4858]: I0127 21:13:28.647566 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2kvj\" (UniqueName: \"kubernetes.io/projected/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-kube-api-access-d2kvj\") pod \"edbbb70d-b8f9-494f-a89d-5a5e91a9f112\" (UID: \"edbbb70d-b8f9-494f-a89d-5a5e91a9f112\") " Jan 27 21:13:28 crc kubenswrapper[4858]: I0127 21:13:28.647618 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-utilities\") pod \"edbbb70d-b8f9-494f-a89d-5a5e91a9f112\" (UID: \"edbbb70d-b8f9-494f-a89d-5a5e91a9f112\") " Jan 27 21:13:28 crc kubenswrapper[4858]: I0127 21:13:28.648413 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-utilities" (OuterVolumeSpecName: "utilities") pod "edbbb70d-b8f9-494f-a89d-5a5e91a9f112" (UID: "edbbb70d-b8f9-494f-a89d-5a5e91a9f112"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:13:28 crc kubenswrapper[4858]: I0127 21:13:28.648700 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:13:28 crc kubenswrapper[4858]: I0127 21:13:28.653738 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-kube-api-access-d2kvj" (OuterVolumeSpecName: "kube-api-access-d2kvj") pod "edbbb70d-b8f9-494f-a89d-5a5e91a9f112" (UID: "edbbb70d-b8f9-494f-a89d-5a5e91a9f112"). InnerVolumeSpecName "kube-api-access-d2kvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:13:28 crc kubenswrapper[4858]: I0127 21:13:28.737796 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "edbbb70d-b8f9-494f-a89d-5a5e91a9f112" (UID: "edbbb70d-b8f9-494f-a89d-5a5e91a9f112"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:13:28 crc kubenswrapper[4858]: I0127 21:13:28.750284 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2kvj\" (UniqueName: \"kubernetes.io/projected/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-kube-api-access-d2kvj\") on node \"crc\" DevicePath \"\"" Jan 27 21:13:28 crc kubenswrapper[4858]: I0127 21:13:28.750329 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edbbb70d-b8f9-494f-a89d-5a5e91a9f112-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:13:29 crc kubenswrapper[4858]: I0127 21:13:29.022327 4858 generic.go:334] "Generic (PLEG): container finished" podID="edbbb70d-b8f9-494f-a89d-5a5e91a9f112" containerID="cc0bca135f12b0809fc855cf3850beb4470d60b6f96e83cda64755c580a97119" exitCode=0 Jan 27 21:13:29 crc kubenswrapper[4858]: I0127 21:13:29.022419 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6h9qp" event={"ID":"edbbb70d-b8f9-494f-a89d-5a5e91a9f112","Type":"ContainerDied","Data":"cc0bca135f12b0809fc855cf3850beb4470d60b6f96e83cda64755c580a97119"} Jan 27 21:13:29 crc kubenswrapper[4858]: I0127 21:13:29.023719 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6h9qp" event={"ID":"edbbb70d-b8f9-494f-a89d-5a5e91a9f112","Type":"ContainerDied","Data":"4f794d2e0c889badaa13332083d368b8ca54d0776dade3401d1e511936853e06"} Jan 27 21:13:29 crc kubenswrapper[4858]: I0127 21:13:29.022518 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6h9qp" Jan 27 21:13:29 crc kubenswrapper[4858]: I0127 21:13:29.023747 4858 scope.go:117] "RemoveContainer" containerID="cc0bca135f12b0809fc855cf3850beb4470d60b6f96e83cda64755c580a97119" Jan 27 21:13:29 crc kubenswrapper[4858]: I0127 21:13:29.046594 4858 scope.go:117] "RemoveContainer" containerID="cb8e72d7f1ae2a5877bbb6dc222e1e1238eb9d99a9bad529fb35f88df4523efd" Jan 27 21:13:29 crc kubenswrapper[4858]: I0127 21:13:29.068502 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6h9qp"] Jan 27 21:13:29 crc kubenswrapper[4858]: I0127 21:13:29.080596 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6h9qp"] Jan 27 21:13:29 crc kubenswrapper[4858]: I0127 21:13:29.106770 4858 scope.go:117] "RemoveContainer" containerID="5735aa12ba830e31bb3afce12869ba9783055fc3dc9587af5253223823c32902" Jan 27 21:13:29 crc kubenswrapper[4858]: I0127 21:13:29.140077 4858 scope.go:117] "RemoveContainer" containerID="cc0bca135f12b0809fc855cf3850beb4470d60b6f96e83cda64755c580a97119" Jan 27 21:13:29 crc kubenswrapper[4858]: E0127 21:13:29.140616 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc0bca135f12b0809fc855cf3850beb4470d60b6f96e83cda64755c580a97119\": container with ID starting with cc0bca135f12b0809fc855cf3850beb4470d60b6f96e83cda64755c580a97119 not found: ID does not exist" containerID="cc0bca135f12b0809fc855cf3850beb4470d60b6f96e83cda64755c580a97119" Jan 27 21:13:29 crc kubenswrapper[4858]: I0127 21:13:29.140720 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc0bca135f12b0809fc855cf3850beb4470d60b6f96e83cda64755c580a97119"} err="failed to get container status 
\"cc0bca135f12b0809fc855cf3850beb4470d60b6f96e83cda64755c580a97119\": rpc error: code = NotFound desc = could not find container \"cc0bca135f12b0809fc855cf3850beb4470d60b6f96e83cda64755c580a97119\": container with ID starting with cc0bca135f12b0809fc855cf3850beb4470d60b6f96e83cda64755c580a97119 not found: ID does not exist" Jan 27 21:13:29 crc kubenswrapper[4858]: I0127 21:13:29.140817 4858 scope.go:117] "RemoveContainer" containerID="cb8e72d7f1ae2a5877bbb6dc222e1e1238eb9d99a9bad529fb35f88df4523efd" Jan 27 21:13:29 crc kubenswrapper[4858]: E0127 21:13:29.141294 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb8e72d7f1ae2a5877bbb6dc222e1e1238eb9d99a9bad529fb35f88df4523efd\": container with ID starting with cb8e72d7f1ae2a5877bbb6dc222e1e1238eb9d99a9bad529fb35f88df4523efd not found: ID does not exist" containerID="cb8e72d7f1ae2a5877bbb6dc222e1e1238eb9d99a9bad529fb35f88df4523efd" Jan 27 21:13:29 crc kubenswrapper[4858]: I0127 21:13:29.141339 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb8e72d7f1ae2a5877bbb6dc222e1e1238eb9d99a9bad529fb35f88df4523efd"} err="failed to get container status \"cb8e72d7f1ae2a5877bbb6dc222e1e1238eb9d99a9bad529fb35f88df4523efd\": rpc error: code = NotFound desc = could not find container \"cb8e72d7f1ae2a5877bbb6dc222e1e1238eb9d99a9bad529fb35f88df4523efd\": container with ID starting with cb8e72d7f1ae2a5877bbb6dc222e1e1238eb9d99a9bad529fb35f88df4523efd not found: ID does not exist" Jan 27 21:13:29 crc kubenswrapper[4858]: I0127 21:13:29.141370 4858 scope.go:117] "RemoveContainer" containerID="5735aa12ba830e31bb3afce12869ba9783055fc3dc9587af5253223823c32902" Jan 27 21:13:29 crc kubenswrapper[4858]: E0127 21:13:29.141890 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5735aa12ba830e31bb3afce12869ba9783055fc3dc9587af5253223823c32902\": container with ID starting with 5735aa12ba830e31bb3afce12869ba9783055fc3dc9587af5253223823c32902 not found: ID does not exist" containerID="5735aa12ba830e31bb3afce12869ba9783055fc3dc9587af5253223823c32902" Jan 27 21:13:29 crc kubenswrapper[4858]: I0127 21:13:29.141920 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5735aa12ba830e31bb3afce12869ba9783055fc3dc9587af5253223823c32902"} err="failed to get container status \"5735aa12ba830e31bb3afce12869ba9783055fc3dc9587af5253223823c32902\": rpc error: code = NotFound desc = could not find container \"5735aa12ba830e31bb3afce12869ba9783055fc3dc9587af5253223823c32902\": container with ID starting with 5735aa12ba830e31bb3afce12869ba9783055fc3dc9587af5253223823c32902 not found: ID does not exist" Jan 27 21:13:29 crc kubenswrapper[4858]: I0127 21:13:29.640279 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8sl4v" Jan 27 21:13:29 crc kubenswrapper[4858]: I0127 21:13:29.715821 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8sl4v" Jan 27 21:13:30 crc kubenswrapper[4858]: I0127 21:13:30.083354 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edbbb70d-b8f9-494f-a89d-5a5e91a9f112" path="/var/lib/kubelet/pods/edbbb70d-b8f9-494f-a89d-5a5e91a9f112/volumes" Jan 27 21:13:32 crc kubenswrapper[4858]: I0127 21:13:32.836171 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-8sl4v"] Jan 27 21:13:32 crc kubenswrapper[4858]: I0127 21:13:32.837018 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8sl4v" podUID="262e3465-bb82-42e3-a84c-146a5101052b" containerName="registry-server" containerID="cri-o://d09d57805c8537cf8638f1034d6329380a53b45cdde9d580917fd7c0f0fcb82e" gracePeriod=2 Jan 27 21:13:33 crc kubenswrapper[4858]: I0127 21:13:33.081065 4858 generic.go:334] "Generic (PLEG): container finished" podID="262e3465-bb82-42e3-a84c-146a5101052b" containerID="d09d57805c8537cf8638f1034d6329380a53b45cdde9d580917fd7c0f0fcb82e" exitCode=0 Jan 27 21:13:33 crc kubenswrapper[4858]: I0127 21:13:33.081287 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sl4v" event={"ID":"262e3465-bb82-42e3-a84c-146a5101052b","Type":"ContainerDied","Data":"d09d57805c8537cf8638f1034d6329380a53b45cdde9d580917fd7c0f0fcb82e"} Jan 27 21:13:33 crc kubenswrapper[4858]: I0127 21:13:33.342766 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8sl4v" Jan 27 21:13:33 crc kubenswrapper[4858]: I0127 21:13:33.464810 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262e3465-bb82-42e3-a84c-146a5101052b-utilities\") pod \"262e3465-bb82-42e3-a84c-146a5101052b\" (UID: \"262e3465-bb82-42e3-a84c-146a5101052b\") " Jan 27 21:13:33 crc kubenswrapper[4858]: I0127 21:13:33.464878 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm2l8\" (UniqueName: \"kubernetes.io/projected/262e3465-bb82-42e3-a84c-146a5101052b-kube-api-access-mm2l8\") pod \"262e3465-bb82-42e3-a84c-146a5101052b\" (UID: \"262e3465-bb82-42e3-a84c-146a5101052b\") " Jan 27 21:13:33 crc kubenswrapper[4858]: I0127 21:13:33.465178 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262e3465-bb82-42e3-a84c-146a5101052b-catalog-content\") pod \"262e3465-bb82-42e3-a84c-146a5101052b\" (UID: \"262e3465-bb82-42e3-a84c-146a5101052b\") " Jan 27 21:13:33 crc kubenswrapper[4858]: I0127 21:13:33.465740 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/262e3465-bb82-42e3-a84c-146a5101052b-utilities" (OuterVolumeSpecName: "utilities") pod "262e3465-bb82-42e3-a84c-146a5101052b" (UID: "262e3465-bb82-42e3-a84c-146a5101052b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:13:33 crc kubenswrapper[4858]: I0127 21:13:33.471783 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/262e3465-bb82-42e3-a84c-146a5101052b-kube-api-access-mm2l8" (OuterVolumeSpecName: "kube-api-access-mm2l8") pod "262e3465-bb82-42e3-a84c-146a5101052b" (UID: "262e3465-bb82-42e3-a84c-146a5101052b"). InnerVolumeSpecName "kube-api-access-mm2l8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:13:33 crc kubenswrapper[4858]: I0127 21:13:33.567904 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/262e3465-bb82-42e3-a84c-146a5101052b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:13:33 crc kubenswrapper[4858]: I0127 21:13:33.567938 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm2l8\" (UniqueName: \"kubernetes.io/projected/262e3465-bb82-42e3-a84c-146a5101052b-kube-api-access-mm2l8\") on node \"crc\" DevicePath \"\"" Jan 27 21:13:33 crc kubenswrapper[4858]: I0127 21:13:33.614773 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/262e3465-bb82-42e3-a84c-146a5101052b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "262e3465-bb82-42e3-a84c-146a5101052b" (UID: "262e3465-bb82-42e3-a84c-146a5101052b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:13:33 crc kubenswrapper[4858]: I0127 21:13:33.670150 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/262e3465-bb82-42e3-a84c-146a5101052b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:13:34 crc kubenswrapper[4858]: I0127 21:13:34.101043 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8sl4v" event={"ID":"262e3465-bb82-42e3-a84c-146a5101052b","Type":"ContainerDied","Data":"aab3c8bb90b3902be6fc52e169354e5d6473419d2e7956913a64c4a80832380b"} Jan 27 21:13:34 crc kubenswrapper[4858]: I0127 21:13:34.101384 4858 scope.go:117] "RemoveContainer" containerID="d09d57805c8537cf8638f1034d6329380a53b45cdde9d580917fd7c0f0fcb82e" Jan 27 21:13:34 crc kubenswrapper[4858]: I0127 21:13:34.101112 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8sl4v" Jan 27 21:13:34 crc kubenswrapper[4858]: I0127 21:13:34.135631 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8sl4v"] Jan 27 21:13:34 crc kubenswrapper[4858]: I0127 21:13:34.137167 4858 scope.go:117] "RemoveContainer" containerID="320bf2b675d808e1ad4a7b322f869d9b698d3e7b1e994092a5346df31fc06e79" Jan 27 21:13:34 crc kubenswrapper[4858]: I0127 21:13:34.145884 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8sl4v"] Jan 27 21:13:34 crc kubenswrapper[4858]: I0127 21:13:34.159937 4858 scope.go:117] "RemoveContainer" containerID="5600296eb03514a199d7569728e80da773d301cd015fe71bc9469dfbae8fed35" Jan 27 21:13:36 crc kubenswrapper[4858]: I0127 21:13:36.087020 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="262e3465-bb82-42e3-a84c-146a5101052b" path="/var/lib/kubelet/pods/262e3465-bb82-42e3-a84c-146a5101052b/volumes" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.211672 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5"] Jan 27 21:15:00 crc kubenswrapper[4858]: E0127 21:15:00.213488 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edbbb70d-b8f9-494f-a89d-5a5e91a9f112" containerName="registry-server" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.213511 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="edbbb70d-b8f9-494f-a89d-5a5e91a9f112" containerName="registry-server" Jan 27 21:15:00 crc kubenswrapper[4858]: E0127 21:15:00.213566 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edbbb70d-b8f9-494f-a89d-5a5e91a9f112" containerName="extract-content" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.213576 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="edbbb70d-b8f9-494f-a89d-5a5e91a9f112" containerName="extract-content" Jan 27 21:15:00 crc kubenswrapper[4858]: E0127 21:15:00.213599 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edbbb70d-b8f9-494f-a89d-5a5e91a9f112" containerName="extract-utilities" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.213612 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="edbbb70d-b8f9-494f-a89d-5a5e91a9f112" containerName="extract-utilities" Jan 27 21:15:00 crc kubenswrapper[4858]: E0127 21:15:00.213653 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="262e3465-bb82-42e3-a84c-146a5101052b" containerName="extract-content" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.213662 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="262e3465-bb82-42e3-a84c-146a5101052b" containerName="extract-content" Jan 27 21:15:00 crc kubenswrapper[4858]: E0127 21:15:00.213678 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="262e3465-bb82-42e3-a84c-146a5101052b" containerName="extract-utilities" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.213687 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="262e3465-bb82-42e3-a84c-146a5101052b" containerName="extract-utilities" Jan 27 21:15:00 crc kubenswrapper[4858]: E0127 21:15:00.213738 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="262e3465-bb82-42e3-a84c-146a5101052b" containerName="registry-server" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.213749 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="262e3465-bb82-42e3-a84c-146a5101052b" containerName="registry-server" Jan 27 21:15:00 crc kubenswrapper[4858]: E0127 21:15:00.213774 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33833b9-cab2-4b3e-995c-0aebf87012ea" containerName="extract-utilities" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.213810 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33833b9-cab2-4b3e-995c-0aebf87012ea" containerName="extract-utilities" Jan 27 21:15:00 crc kubenswrapper[4858]: E0127 21:15:00.213825 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33833b9-cab2-4b3e-995c-0aebf87012ea" containerName="extract-content" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.213834 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33833b9-cab2-4b3e-995c-0aebf87012ea" containerName="extract-content" Jan 27 21:15:00 crc kubenswrapper[4858]: E0127 21:15:00.213856 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33833b9-cab2-4b3e-995c-0aebf87012ea" containerName="registry-server" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.213891 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33833b9-cab2-4b3e-995c-0aebf87012ea" containerName="registry-server" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.214345 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="edbbb70d-b8f9-494f-a89d-5a5e91a9f112" containerName="registry-server" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.214384 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c33833b9-cab2-4b3e-995c-0aebf87012ea" containerName="registry-server" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.214424 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="262e3465-bb82-42e3-a84c-146a5101052b" containerName="registry-server" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.215712 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.219242 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.220469 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.225960 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5"] Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.307112 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05ef9663-c603-418e-976a-4193f1b0f88f-secret-volume\") pod \"collect-profiles-29492475-29fn5\" (UID: \"05ef9663-c603-418e-976a-4193f1b0f88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.307191 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05ef9663-c603-418e-976a-4193f1b0f88f-config-volume\") pod \"collect-profiles-29492475-29fn5\" (UID: \"05ef9663-c603-418e-976a-4193f1b0f88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.307294 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb68x\" (UniqueName: \"kubernetes.io/projected/05ef9663-c603-418e-976a-4193f1b0f88f-kube-api-access-jb68x\") pod \"collect-profiles-29492475-29fn5\" (UID: \"05ef9663-c603-418e-976a-4193f1b0f88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.409226 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05ef9663-c603-418e-976a-4193f1b0f88f-config-volume\") pod \"collect-profiles-29492475-29fn5\" (UID: \"05ef9663-c603-418e-976a-4193f1b0f88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.409363 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb68x\" (UniqueName: \"kubernetes.io/projected/05ef9663-c603-418e-976a-4193f1b0f88f-kube-api-access-jb68x\") pod \"collect-profiles-29492475-29fn5\" (UID: \"05ef9663-c603-418e-976a-4193f1b0f88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.409492 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05ef9663-c603-418e-976a-4193f1b0f88f-secret-volume\") pod \"collect-profiles-29492475-29fn5\" (UID: \"05ef9663-c603-418e-976a-4193f1b0f88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.410225 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05ef9663-c603-418e-976a-4193f1b0f88f-config-volume\") pod 
\"collect-profiles-29492475-29fn5\" (UID: \"05ef9663-c603-418e-976a-4193f1b0f88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.416993 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05ef9663-c603-418e-976a-4193f1b0f88f-secret-volume\") pod \"collect-profiles-29492475-29fn5\" (UID: \"05ef9663-c603-418e-976a-4193f1b0f88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.430204 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb68x\" (UniqueName: \"kubernetes.io/projected/05ef9663-c603-418e-976a-4193f1b0f88f-kube-api-access-jb68x\") pod \"collect-profiles-29492475-29fn5\" (UID: \"05ef9663-c603-418e-976a-4193f1b0f88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5" Jan 27 21:15:00 crc kubenswrapper[4858]: I0127 21:15:00.546400 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5" Jan 27 21:15:01 crc kubenswrapper[4858]: I0127 21:15:01.034761 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5"] Jan 27 21:15:01 crc kubenswrapper[4858]: E0127 21:15:01.554713 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05ef9663_c603_418e_976a_4193f1b0f88f.slice/crio-conmon-6dfec1f5b606621fca16f3d4ee7e619e3e8c28ff884a86504b177e62e09aebf2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05ef9663_c603_418e_976a_4193f1b0f88f.slice/crio-6dfec1f5b606621fca16f3d4ee7e619e3e8c28ff884a86504b177e62e09aebf2.scope\": RecentStats: unable to find data in memory cache]" Jan 27 21:15:01 crc kubenswrapper[4858]: I0127 21:15:01.978311 4858 generic.go:334] "Generic (PLEG): container finished" podID="05ef9663-c603-418e-976a-4193f1b0f88f" containerID="6dfec1f5b606621fca16f3d4ee7e619e3e8c28ff884a86504b177e62e09aebf2" exitCode=0 Jan 27 21:15:01 crc kubenswrapper[4858]: I0127 21:15:01.978378 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5" event={"ID":"05ef9663-c603-418e-976a-4193f1b0f88f","Type":"ContainerDied","Data":"6dfec1f5b606621fca16f3d4ee7e619e3e8c28ff884a86504b177e62e09aebf2"} Jan 27 21:15:01 crc kubenswrapper[4858]: I0127 21:15:01.978414 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5" event={"ID":"05ef9663-c603-418e-976a-4193f1b0f88f","Type":"ContainerStarted","Data":"eea69b9caab1ae4082bf05e17b65a52ef2050924504436d68ce0d8bd561476bf"} Jan 27 21:15:03 crc kubenswrapper[4858]: I0127 21:15:03.396798 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5" Jan 27 21:15:03 crc kubenswrapper[4858]: I0127 21:15:03.526959 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb68x\" (UniqueName: \"kubernetes.io/projected/05ef9663-c603-418e-976a-4193f1b0f88f-kube-api-access-jb68x\") pod \"05ef9663-c603-418e-976a-4193f1b0f88f\" (UID: \"05ef9663-c603-418e-976a-4193f1b0f88f\") " Jan 27 21:15:03 crc kubenswrapper[4858]: I0127 21:15:03.527212 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05ef9663-c603-418e-976a-4193f1b0f88f-config-volume\") pod \"05ef9663-c603-418e-976a-4193f1b0f88f\" (UID: \"05ef9663-c603-418e-976a-4193f1b0f88f\") " Jan 27 21:15:03 crc kubenswrapper[4858]: I0127 21:15:03.527297 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05ef9663-c603-418e-976a-4193f1b0f88f-secret-volume\") pod \"05ef9663-c603-418e-976a-4193f1b0f88f\" (UID: \"05ef9663-c603-418e-976a-4193f1b0f88f\") " Jan 27 21:15:03 crc kubenswrapper[4858]: I0127 21:15:03.528106 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05ef9663-c603-418e-976a-4193f1b0f88f-config-volume" (OuterVolumeSpecName: "config-volume") pod "05ef9663-c603-418e-976a-4193f1b0f88f" (UID: "05ef9663-c603-418e-976a-4193f1b0f88f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:15:03 crc kubenswrapper[4858]: I0127 21:15:03.533166 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ef9663-c603-418e-976a-4193f1b0f88f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "05ef9663-c603-418e-976a-4193f1b0f88f" (UID: "05ef9663-c603-418e-976a-4193f1b0f88f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:15:03 crc kubenswrapper[4858]: I0127 21:15:03.533723 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ef9663-c603-418e-976a-4193f1b0f88f-kube-api-access-jb68x" (OuterVolumeSpecName: "kube-api-access-jb68x") pod "05ef9663-c603-418e-976a-4193f1b0f88f" (UID: "05ef9663-c603-418e-976a-4193f1b0f88f"). InnerVolumeSpecName "kube-api-access-jb68x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:15:03 crc kubenswrapper[4858]: I0127 21:15:03.629004 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05ef9663-c603-418e-976a-4193f1b0f88f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 21:15:03 crc kubenswrapper[4858]: I0127 21:15:03.629042 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05ef9663-c603-418e-976a-4193f1b0f88f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 21:15:03 crc kubenswrapper[4858]: I0127 21:15:03.629059 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb68x\" (UniqueName: \"kubernetes.io/projected/05ef9663-c603-418e-976a-4193f1b0f88f-kube-api-access-jb68x\") on node \"crc\" DevicePath \"\"" Jan 27 21:15:04 crc kubenswrapper[4858]: I0127 21:15:04.004536 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5" event={"ID":"05ef9663-c603-418e-976a-4193f1b0f88f","Type":"ContainerDied","Data":"eea69b9caab1ae4082bf05e17b65a52ef2050924504436d68ce0d8bd561476bf"} Jan 27 21:15:04 crc kubenswrapper[4858]: I0127 21:15:04.004642 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eea69b9caab1ae4082bf05e17b65a52ef2050924504436d68ce0d8bd561476bf" Jan 27 21:15:04 crc kubenswrapper[4858]: I0127 21:15:04.004658 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5" Jan 27 21:15:04 crc kubenswrapper[4858]: I0127 21:15:04.475814 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75"] Jan 27 21:15:04 crc kubenswrapper[4858]: I0127 21:15:04.487369 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492430-dcj75"] Jan 27 21:15:06 crc kubenswrapper[4858]: I0127 21:15:06.086224 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a65f9c3-3b88-4bab-830f-00ba01b22f20" path="/var/lib/kubelet/pods/6a65f9c3-3b88-4bab-830f-00ba01b22f20/volumes" Jan 27 21:15:29 crc kubenswrapper[4858]: I0127 21:15:29.332352 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:15:29 crc kubenswrapper[4858]: I0127 21:15:29.333844 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:15:59 crc kubenswrapper[4858]: I0127 21:15:59.329285 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:15:59 crc kubenswrapper[4858]: I0127 21:15:59.330225 4858 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:16:01 crc kubenswrapper[4858]: I0127 21:16:01.128341 4858 scope.go:117] "RemoveContainer" containerID="dd316e48f868476ba9b94a82472e4f9be6a0ada9906a96f774e6a7d60dbcdb01" Jan 27 21:16:13 crc kubenswrapper[4858]: I0127 21:16:13.756013 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-574fc98977-sp7zp" podUID="57e04641-598d-459b-9996-0ae4182ae4fb" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 27 21:16:29 crc kubenswrapper[4858]: I0127 21:16:29.328663 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:16:29 crc kubenswrapper[4858]: I0127 21:16:29.329490 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:16:29 crc kubenswrapper[4858]: I0127 21:16:29.329571 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 21:16:29 crc kubenswrapper[4858]: I0127 21:16:29.330548 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6cdfc3b21a124ca7f68a81e07cc50c1fe8c612ea7759b765a444cd690e100b72"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 21:16:29 crc kubenswrapper[4858]: I0127 21:16:29.330627 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://6cdfc3b21a124ca7f68a81e07cc50c1fe8c612ea7759b765a444cd690e100b72" gracePeriod=600 Jan 27 21:16:29 crc kubenswrapper[4858]: I0127 21:16:29.865224 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="6cdfc3b21a124ca7f68a81e07cc50c1fe8c612ea7759b765a444cd690e100b72" exitCode=0 Jan 27 21:16:29 crc kubenswrapper[4858]: I0127 21:16:29.865531 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"6cdfc3b21a124ca7f68a81e07cc50c1fe8c612ea7759b765a444cd690e100b72"} Jan 27 21:16:29 crc kubenswrapper[4858]: I0127 21:16:29.866518 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c"} Jan 27 21:16:29 crc kubenswrapper[4858]: I0127 21:16:29.866572 4858 scope.go:117] 
"RemoveContainer" containerID="f1446e2ecd3ce99fa302705c7555d3a2daeb595ff125c6b7310e756708208ad9" Jan 27 21:18:29 crc kubenswrapper[4858]: I0127 21:18:29.328890 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:18:29 crc kubenswrapper[4858]: I0127 21:18:29.329446 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:18:59 crc kubenswrapper[4858]: I0127 21:18:59.328940 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:18:59 crc kubenswrapper[4858]: I0127 21:18:59.329713 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:19:23 crc kubenswrapper[4858]: I0127 21:19:23.865467 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-smxjv"] Jan 27 21:19:23 crc kubenswrapper[4858]: E0127 21:19:23.866809 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ef9663-c603-418e-976a-4193f1b0f88f" containerName="collect-profiles" Jan 27 21:19:23 crc kubenswrapper[4858]: I0127 21:19:23.866834 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ef9663-c603-418e-976a-4193f1b0f88f" containerName="collect-profiles" Jan 27 21:19:23 crc kubenswrapper[4858]: I0127 21:19:23.867140 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="05ef9663-c603-418e-976a-4193f1b0f88f" containerName="collect-profiles" Jan 27 21:19:23 crc kubenswrapper[4858]: I0127 21:19:23.870884 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-smxjv" Jan 27 21:19:23 crc kubenswrapper[4858]: I0127 21:19:23.894936 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-smxjv"] Jan 27 21:19:24 crc kubenswrapper[4858]: I0127 21:19:24.013790 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-catalog-content\") pod \"certified-operators-smxjv\" (UID: \"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88\") " pod="openshift-marketplace/certified-operators-smxjv" Jan 27 21:19:24 crc kubenswrapper[4858]: I0127 21:19:24.014354 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9rpt\" (UniqueName: \"kubernetes.io/projected/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-kube-api-access-c9rpt\") pod \"certified-operators-smxjv\" (UID: \"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88\") " pod="openshift-marketplace/certified-operators-smxjv" Jan 27 21:19:24 crc kubenswrapper[4858]: I0127 21:19:24.014610 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-utilities\") pod \"certified-operators-smxjv\" (UID: \"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88\") " pod="openshift-marketplace/certified-operators-smxjv" Jan 27 21:19:24 crc kubenswrapper[4858]: I0127 21:19:24.116326 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-utilities\") pod \"certified-operators-smxjv\" (UID: \"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88\") " pod="openshift-marketplace/certified-operators-smxjv" Jan 27 21:19:24 crc kubenswrapper[4858]: I0127 21:19:24.116751 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-catalog-content\") pod \"certified-operators-smxjv\" (UID: \"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88\") " pod="openshift-marketplace/certified-operators-smxjv" Jan 27 21:19:24 crc kubenswrapper[4858]: I0127 21:19:24.117136 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-utilities\") pod \"certified-operators-smxjv\" (UID: \"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88\") " pod="openshift-marketplace/certified-operators-smxjv" Jan 27 21:19:24 crc kubenswrapper[4858]: I0127 21:19:24.117270 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9rpt\" (UniqueName: \"kubernetes.io/projected/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-kube-api-access-c9rpt\") pod \"certified-operators-smxjv\" (UID: \"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88\") " pod="openshift-marketplace/certified-operators-smxjv" Jan 27 21:19:24 crc kubenswrapper[4858]: I0127 21:19:24.117610 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-catalog-content\") pod \"certified-operators-smxjv\" (UID: \"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88\") " pod="openshift-marketplace/certified-operators-smxjv" Jan 27 21:19:24 crc kubenswrapper[4858]: I0127 21:19:24.187677 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c9rpt\" (UniqueName: \"kubernetes.io/projected/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-kube-api-access-c9rpt\") pod \"certified-operators-smxjv\" (UID: \"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88\") " pod="openshift-marketplace/certified-operators-smxjv" Jan 27 21:19:24 crc kubenswrapper[4858]: I0127 21:19:24.192122 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-smxjv" Jan 27 21:19:24 crc kubenswrapper[4858]: I0127 21:19:24.760320 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-smxjv"] Jan 27 21:19:25 crc kubenswrapper[4858]: I0127 21:19:25.728001 4858 generic.go:334] "Generic (PLEG): container finished" podID="6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88" containerID="51d1a1347cf77a7ecb699a789ed85cd396cb7ed3620dcdaffb41deb1991bb819" exitCode=0 Jan 27 21:19:25 crc kubenswrapper[4858]: I0127 21:19:25.728114 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smxjv" event={"ID":"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88","Type":"ContainerDied","Data":"51d1a1347cf77a7ecb699a789ed85cd396cb7ed3620dcdaffb41deb1991bb819"} Jan 27 21:19:25 crc kubenswrapper[4858]: I0127 21:19:25.728473 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smxjv" event={"ID":"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88","Type":"ContainerStarted","Data":"80f751e8b55f87f69e94f27a93e834bbb54d121f78fd50b7d162e582b51e8429"} Jan 27 21:19:25 crc kubenswrapper[4858]: I0127 21:19:25.730674 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 21:19:27 crc kubenswrapper[4858]: I0127 21:19:27.776787 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smxjv" event={"ID":"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88","Type":"ContainerStarted","Data":"67f4cd52caaefd1ad01851148e7dbed004dfb8da5dbe9eff7856fb4f39de6996"} Jan 27 21:19:28 crc kubenswrapper[4858]: I0127 21:19:28.790397 4858 generic.go:334] "Generic (PLEG): container finished" podID="6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88" containerID="67f4cd52caaefd1ad01851148e7dbed004dfb8da5dbe9eff7856fb4f39de6996" exitCode=0 Jan 27 21:19:28 crc kubenswrapper[4858]: I0127 21:19:28.790560 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smxjv" event={"ID":"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88","Type":"ContainerDied","Data":"67f4cd52caaefd1ad01851148e7dbed004dfb8da5dbe9eff7856fb4f39de6996"} Jan 27 21:19:29 crc kubenswrapper[4858]: I0127 21:19:29.329187 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:19:29 crc kubenswrapper[4858]: I0127 21:19:29.329739 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:19:29 crc kubenswrapper[4858]: I0127 21:19:29.329791 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 21:19:29 crc kubenswrapper[4858]: I0127 21:19:29.330661 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 21:19:29 crc kubenswrapper[4858]: I0127 21:19:29.330725 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" gracePeriod=600 Jan 27 21:19:29 crc kubenswrapper[4858]: E0127 21:19:29.465987 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:19:29 crc kubenswrapper[4858]: I0127 21:19:29.803275 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" exitCode=0 Jan 27 21:19:29 crc kubenswrapper[4858]: I0127 21:19:29.803315 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c"} Jan 27 21:19:29 crc kubenswrapper[4858]: I0127 21:19:29.803622 4858 scope.go:117] "RemoveContainer" containerID="6cdfc3b21a124ca7f68a81e07cc50c1fe8c612ea7759b765a444cd690e100b72" Jan 27 21:19:29 crc kubenswrapper[4858]: I0127 21:19:29.804252 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:19:29 crc kubenswrapper[4858]: E0127 21:19:29.804492 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:19:29 crc kubenswrapper[4858]: I0127 21:19:29.811328 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smxjv" event={"ID":"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88","Type":"ContainerStarted","Data":"e3f273464b33c729f5601f880555751506cbb8da470517e5bb6de09474005eef"} Jan 27 21:19:29 crc kubenswrapper[4858]: I0127 21:19:29.871101 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-smxjv" podStartSLOduration=3.410594749 podStartE2EDuration="6.871075651s" podCreationTimestamp="2026-01-27 21:19:23 +0000 UTC" firstStartedPulling="2026-01-27 21:19:25.730392077 +0000 UTC m=+4310.438207783" 
lastFinishedPulling="2026-01-27 21:19:29.190872979 +0000 UTC m=+4313.898688685" observedRunningTime="2026-01-27 21:19:29.853052052 +0000 UTC m=+4314.560867768" watchObservedRunningTime="2026-01-27 21:19:29.871075651 +0000 UTC m=+4314.578891357" Jan 27 21:19:34 crc kubenswrapper[4858]: I0127 21:19:34.193199 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-smxjv" Jan 27 21:19:34 crc kubenswrapper[4858]: I0127 21:19:34.193774 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-smxjv" Jan 27 21:19:34 crc kubenswrapper[4858]: I0127 21:19:34.242377 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-smxjv" Jan 27 21:19:34 crc kubenswrapper[4858]: I0127 21:19:34.916263 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-smxjv" Jan 27 21:19:34 crc kubenswrapper[4858]: I0127 21:19:34.974691 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-smxjv"] Jan 27 21:19:36 crc kubenswrapper[4858]: I0127 21:19:36.879885 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-smxjv" podUID="6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88" containerName="registry-server" containerID="cri-o://e3f273464b33c729f5601f880555751506cbb8da470517e5bb6de09474005eef" gracePeriod=2 Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.393999 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-smxjv" Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.454505 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9rpt\" (UniqueName: \"kubernetes.io/projected/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-kube-api-access-c9rpt\") pod \"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88\" (UID: \"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88\") " Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.454810 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-utilities\") pod \"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88\" (UID: \"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88\") " Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.454907 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-catalog-content\") pod \"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88\" (UID: \"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88\") " Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.456100 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-utilities" (OuterVolumeSpecName: "utilities") pod "6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88" (UID: "6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.469774 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-kube-api-access-c9rpt" (OuterVolumeSpecName: "kube-api-access-c9rpt") pod "6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88" (UID: "6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88"). InnerVolumeSpecName "kube-api-access-c9rpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.517742 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88" (UID: "6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.558071 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9rpt\" (UniqueName: \"kubernetes.io/projected/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-kube-api-access-c9rpt\") on node \"crc\" DevicePath \"\"" Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.558130 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.558144 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.894673 4858 generic.go:334] "Generic (PLEG): container finished" podID="6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88" containerID="e3f273464b33c729f5601f880555751506cbb8da470517e5bb6de09474005eef" exitCode=0 Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.894735 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smxjv" event={"ID":"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88","Type":"ContainerDied","Data":"e3f273464b33c729f5601f880555751506cbb8da470517e5bb6de09474005eef"} Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.894777 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-smxjv" event={"ID":"6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88","Type":"ContainerDied","Data":"80f751e8b55f87f69e94f27a93e834bbb54d121f78fd50b7d162e582b51e8429"} Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.894804 4858 scope.go:117] "RemoveContainer" containerID="e3f273464b33c729f5601f880555751506cbb8da470517e5bb6de09474005eef" Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.894987 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-smxjv" Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.919093 4858 scope.go:117] "RemoveContainer" containerID="67f4cd52caaefd1ad01851148e7dbed004dfb8da5dbe9eff7856fb4f39de6996" Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.947454 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-smxjv"] Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.958989 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-smxjv"] Jan 27 21:19:37 crc kubenswrapper[4858]: I0127 21:19:37.972633 4858 scope.go:117] "RemoveContainer" containerID="51d1a1347cf77a7ecb699a789ed85cd396cb7ed3620dcdaffb41deb1991bb819" Jan 27 21:19:38 crc kubenswrapper[4858]: I0127 21:19:38.040650 4858 scope.go:117] "RemoveContainer" containerID="e3f273464b33c729f5601f880555751506cbb8da470517e5bb6de09474005eef" Jan 27 21:19:38 crc kubenswrapper[4858]: E0127 21:19:38.041287 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3f273464b33c729f5601f880555751506cbb8da470517e5bb6de09474005eef\": container with ID starting with e3f273464b33c729f5601f880555751506cbb8da470517e5bb6de09474005eef not found: ID does not exist" containerID="e3f273464b33c729f5601f880555751506cbb8da470517e5bb6de09474005eef" Jan 27 21:19:38 crc kubenswrapper[4858]: I0127 21:19:38.041353 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3f273464b33c729f5601f880555751506cbb8da470517e5bb6de09474005eef"} err="failed to get container status \"e3f273464b33c729f5601f880555751506cbb8da470517e5bb6de09474005eef\": rpc error: code = NotFound desc = could not find container \"e3f273464b33c729f5601f880555751506cbb8da470517e5bb6de09474005eef\": container with ID starting with e3f273464b33c729f5601f880555751506cbb8da470517e5bb6de09474005eef not found: ID does not exist" Jan 27 21:19:38 crc kubenswrapper[4858]: I0127 21:19:38.041389 4858 scope.go:117] "RemoveContainer" containerID="67f4cd52caaefd1ad01851148e7dbed004dfb8da5dbe9eff7856fb4f39de6996" Jan 27 21:19:38 crc kubenswrapper[4858]: E0127 21:19:38.041847 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67f4cd52caaefd1ad01851148e7dbed004dfb8da5dbe9eff7856fb4f39de6996\": container with ID starting with 67f4cd52caaefd1ad01851148e7dbed004dfb8da5dbe9eff7856fb4f39de6996 not found: ID does not exist" containerID="67f4cd52caaefd1ad01851148e7dbed004dfb8da5dbe9eff7856fb4f39de6996" Jan 27 21:19:38 crc kubenswrapper[4858]: I0127 21:19:38.041900 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67f4cd52caaefd1ad01851148e7dbed004dfb8da5dbe9eff7856fb4f39de6996"} err="failed to get container status \"67f4cd52caaefd1ad01851148e7dbed004dfb8da5dbe9eff7856fb4f39de6996\": rpc error: code = NotFound desc = could not find container \"67f4cd52caaefd1ad01851148e7dbed004dfb8da5dbe9eff7856fb4f39de6996\": container with ID starting with 67f4cd52caaefd1ad01851148e7dbed004dfb8da5dbe9eff7856fb4f39de6996 not found: ID does not exist" Jan 27 21:19:38 crc kubenswrapper[4858]: I0127 21:19:38.041938 4858 scope.go:117] "RemoveContainer" containerID="51d1a1347cf77a7ecb699a789ed85cd396cb7ed3620dcdaffb41deb1991bb819" Jan 27 21:19:38 crc kubenswrapper[4858]: E0127 21:19:38.042788 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"51d1a1347cf77a7ecb699a789ed85cd396cb7ed3620dcdaffb41deb1991bb819\": container with ID starting with 51d1a1347cf77a7ecb699a789ed85cd396cb7ed3620dcdaffb41deb1991bb819 not found: ID does not exist" containerID="51d1a1347cf77a7ecb699a789ed85cd396cb7ed3620dcdaffb41deb1991bb819" Jan 27 21:19:38 crc kubenswrapper[4858]: I0127 21:19:38.042871 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51d1a1347cf77a7ecb699a789ed85cd396cb7ed3620dcdaffb41deb1991bb819"} err="failed to get container status \"51d1a1347cf77a7ecb699a789ed85cd396cb7ed3620dcdaffb41deb1991bb819\": rpc error: code = NotFound desc = could not find container \"51d1a1347cf77a7ecb699a789ed85cd396cb7ed3620dcdaffb41deb1991bb819\": container with ID starting with 51d1a1347cf77a7ecb699a789ed85cd396cb7ed3620dcdaffb41deb1991bb819 not found: ID does not exist" Jan 27 21:19:38 crc kubenswrapper[4858]: I0127 21:19:38.084179 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88" path="/var/lib/kubelet/pods/6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88/volumes" Jan 27 21:19:43 crc kubenswrapper[4858]: I0127 21:19:43.071513 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:19:43 crc kubenswrapper[4858]: E0127 21:19:43.072125 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:19:55 crc kubenswrapper[4858]: I0127 21:19:55.071177 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:19:55 crc kubenswrapper[4858]: E0127 21:19:55.072124 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:20:09 crc kubenswrapper[4858]: I0127 21:20:09.072406 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:20:09 crc kubenswrapper[4858]: E0127 21:20:09.073905 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:20:22 crc kubenswrapper[4858]: I0127 21:20:22.072843 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:20:22 crc kubenswrapper[4858]: E0127 21:20:22.074497 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:20:34 crc kubenswrapper[4858]: I0127 21:20:34.071171 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:20:34 crc kubenswrapper[4858]: E0127 21:20:34.072598 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:20:46 crc kubenswrapper[4858]: I0127 21:20:46.104124 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:20:46 crc kubenswrapper[4858]: E0127 21:20:46.104866 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:20:57 crc kubenswrapper[4858]: I0127 21:20:57.071765 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:20:57 crc kubenswrapper[4858]: E0127 21:20:57.072634 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:21:11 crc kubenswrapper[4858]: I0127 21:21:11.070641 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:21:11 crc kubenswrapper[4858]: E0127 21:21:11.071481 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:21:23 crc kubenswrapper[4858]: I0127 21:21:23.071748 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:21:23 crc kubenswrapper[4858]: E0127 21:21:23.072748 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:21:36 crc kubenswrapper[4858]: I0127 21:21:36.081170 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:21:36 crc kubenswrapper[4858]: E0127 21:21:36.082193 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:21:50 crc kubenswrapper[4858]: I0127 21:21:50.071252 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:21:50 crc kubenswrapper[4858]: E0127 21:21:50.072001 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:22:04 crc kubenswrapper[4858]: I0127 21:22:04.071852 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:22:04 crc kubenswrapper[4858]: E0127 21:22:04.072524 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:22:18 crc kubenswrapper[4858]: I0127 21:22:18.071831 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:22:18 crc kubenswrapper[4858]: E0127 21:22:18.072785 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:22:32 crc kubenswrapper[4858]: I0127 21:22:32.072757 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:22:32 crc kubenswrapper[4858]: E0127 21:22:32.074309 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" 
podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:22:44 crc kubenswrapper[4858]: I0127 21:22:44.078049 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:22:44 crc kubenswrapper[4858]: E0127 21:22:44.079385 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:22:59 crc kubenswrapper[4858]: I0127 21:22:59.070846 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:22:59 crc kubenswrapper[4858]: E0127 21:22:59.071926 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:23:14 crc kubenswrapper[4858]: I0127 21:23:14.073458 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:23:14 crc kubenswrapper[4858]: E0127 21:23:14.074495 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:23:27 crc kubenswrapper[4858]: I0127 21:23:27.070951 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:23:27 crc kubenswrapper[4858]: E0127 21:23:27.071776 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:23:29 crc kubenswrapper[4858]: I0127 21:23:29.324755 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4r6qr"] Jan 27 21:23:29 crc kubenswrapper[4858]: E0127 21:23:29.326043 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88" containerName="extract-content" Jan 27 21:23:29 crc kubenswrapper[4858]: I0127 21:23:29.326059 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88" containerName="extract-content" Jan 27 21:23:29 crc kubenswrapper[4858]: E0127 21:23:29.326076 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88" containerName="registry-server" Jan 27 21:23:29 crc kubenswrapper[4858]: I0127 
21:23:29.326086 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88" containerName="registry-server" Jan 27 21:23:29 crc kubenswrapper[4858]: E0127 21:23:29.326124 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88" containerName="extract-utilities" Jan 27 21:23:29 crc kubenswrapper[4858]: I0127 21:23:29.326133 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88" containerName="extract-utilities" Jan 27 21:23:29 crc kubenswrapper[4858]: I0127 21:23:29.326404 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f33ecb6-c0f1-4f49-bc80-5f4189cf7d88" containerName="registry-server" Jan 27 21:23:29 crc kubenswrapper[4858]: I0127 21:23:29.328368 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4r6qr" Jan 27 21:23:29 crc kubenswrapper[4858]: I0127 21:23:29.339821 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4r6qr"] Jan 27 21:23:29 crc kubenswrapper[4858]: I0127 21:23:29.370992 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7908be5-35d1-49a0-8cf7-4b0102f4371c-catalog-content\") pod \"community-operators-4r6qr\" (UID: \"d7908be5-35d1-49a0-8cf7-4b0102f4371c\") " pod="openshift-marketplace/community-operators-4r6qr" Jan 27 21:23:29 crc kubenswrapper[4858]: I0127 21:23:29.371039 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7908be5-35d1-49a0-8cf7-4b0102f4371c-utilities\") pod \"community-operators-4r6qr\" (UID: \"d7908be5-35d1-49a0-8cf7-4b0102f4371c\") " pod="openshift-marketplace/community-operators-4r6qr" Jan 27 21:23:29 crc kubenswrapper[4858]: I0127 21:23:29.371163 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4blh2\" (UniqueName: \"kubernetes.io/projected/d7908be5-35d1-49a0-8cf7-4b0102f4371c-kube-api-access-4blh2\") pod \"community-operators-4r6qr\" (UID: \"d7908be5-35d1-49a0-8cf7-4b0102f4371c\") " pod="openshift-marketplace/community-operators-4r6qr" Jan 27 21:23:29 crc kubenswrapper[4858]: I0127 21:23:29.473614 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7908be5-35d1-49a0-8cf7-4b0102f4371c-catalog-content\") pod \"community-operators-4r6qr\" (UID: \"d7908be5-35d1-49a0-8cf7-4b0102f4371c\") " pod="openshift-marketplace/community-operators-4r6qr" Jan 27 21:23:29 crc kubenswrapper[4858]: I0127 21:23:29.473661 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7908be5-35d1-49a0-8cf7-4b0102f4371c-utilities\") pod \"community-operators-4r6qr\" (UID: \"d7908be5-35d1-49a0-8cf7-4b0102f4371c\") " pod="openshift-marketplace/community-operators-4r6qr" Jan 27 21:23:29 crc kubenswrapper[4858]: I0127 21:23:29.473788 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4blh2\" (UniqueName: \"kubernetes.io/projected/d7908be5-35d1-49a0-8cf7-4b0102f4371c-kube-api-access-4blh2\") pod \"community-operators-4r6qr\" (UID: \"d7908be5-35d1-49a0-8cf7-4b0102f4371c\") " pod="openshift-marketplace/community-operators-4r6qr" 
Jan 27 21:23:29 crc kubenswrapper[4858]: I0127 21:23:29.474343 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7908be5-35d1-49a0-8cf7-4b0102f4371c-catalog-content\") pod \"community-operators-4r6qr\" (UID: \"d7908be5-35d1-49a0-8cf7-4b0102f4371c\") " pod="openshift-marketplace/community-operators-4r6qr" Jan 27 21:23:29 crc kubenswrapper[4858]: I0127 21:23:29.474492 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7908be5-35d1-49a0-8cf7-4b0102f4371c-utilities\") pod \"community-operators-4r6qr\" (UID: \"d7908be5-35d1-49a0-8cf7-4b0102f4371c\") " pod="openshift-marketplace/community-operators-4r6qr" Jan 27 21:23:29 crc kubenswrapper[4858]: I0127 21:23:29.495747 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4blh2\" (UniqueName: \"kubernetes.io/projected/d7908be5-35d1-49a0-8cf7-4b0102f4371c-kube-api-access-4blh2\") pod \"community-operators-4r6qr\" (UID: \"d7908be5-35d1-49a0-8cf7-4b0102f4371c\") " pod="openshift-marketplace/community-operators-4r6qr" Jan 27 21:23:29 crc kubenswrapper[4858]: I0127 21:23:29.648345 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4r6qr" Jan 27 21:23:30 crc kubenswrapper[4858]: I0127 21:23:30.235749 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4r6qr"] Jan 27 21:23:30 crc kubenswrapper[4858]: I0127 21:23:30.270384 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r6qr" event={"ID":"d7908be5-35d1-49a0-8cf7-4b0102f4371c","Type":"ContainerStarted","Data":"5d2c8da32c57b574277d89bf89596552dc0d92ce6a79fed70e6c8bc3314f583c"} Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.120218 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mrrcx"] Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.126791 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mrrcx" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.137453 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mrrcx"] Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.228568 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6gsv\" (UniqueName: \"kubernetes.io/projected/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-kube-api-access-d6gsv\") pod \"redhat-operators-mrrcx\" (UID: \"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9\") " pod="openshift-marketplace/redhat-operators-mrrcx" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.228806 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-utilities\") pod \"redhat-operators-mrrcx\" (UID: \"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9\") " pod="openshift-marketplace/redhat-operators-mrrcx" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.228843 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-catalog-content\") pod \"redhat-operators-mrrcx\" (UID: \"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9\") " pod="openshift-marketplace/redhat-operators-mrrcx" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.291000 4858 generic.go:334] "Generic (PLEG): container finished" podID="d7908be5-35d1-49a0-8cf7-4b0102f4371c" containerID="5454778bc70e59943629203c18b286eeb5c8163c63c713b1a971798e9ef87616" exitCode=0 Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.291093 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r6qr" event={"ID":"d7908be5-35d1-49a0-8cf7-4b0102f4371c","Type":"ContainerDied","Data":"5454778bc70e59943629203c18b286eeb5c8163c63c713b1a971798e9ef87616"} Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.333842 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-utilities\") pod \"redhat-operators-mrrcx\" (UID: \"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9\") " pod="openshift-marketplace/redhat-operators-mrrcx" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.333920 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-catalog-content\") pod \"redhat-operators-mrrcx\" (UID: \"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9\") " pod="openshift-marketplace/redhat-operators-mrrcx" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.335024 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-catalog-content\") pod \"redhat-operators-mrrcx\" (UID: \"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9\") " pod="openshift-marketplace/redhat-operators-mrrcx" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.335020 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-utilities\") pod \"redhat-operators-mrrcx\" (UID: \"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9\") " 
pod="openshift-marketplace/redhat-operators-mrrcx" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.338707 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6gsv\" (UniqueName: \"kubernetes.io/projected/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-kube-api-access-d6gsv\") pod \"redhat-operators-mrrcx\" (UID: \"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9\") " pod="openshift-marketplace/redhat-operators-mrrcx" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.363347 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6gsv\" (UniqueName: \"kubernetes.io/projected/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-kube-api-access-d6gsv\") pod \"redhat-operators-mrrcx\" (UID: \"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9\") " pod="openshift-marketplace/redhat-operators-mrrcx" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.447324 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrrcx" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.737668 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-98npt"] Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.741109 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98npt" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.753604 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98npt"] Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.861892 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-utilities\") pod \"redhat-marketplace-98npt\" (UID: \"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1\") " pod="openshift-marketplace/redhat-marketplace-98npt" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.862179 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-879rp\" (UniqueName: \"kubernetes.io/projected/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-kube-api-access-879rp\") pod \"redhat-marketplace-98npt\" (UID: \"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1\") " pod="openshift-marketplace/redhat-marketplace-98npt" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.862633 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-catalog-content\") pod \"redhat-marketplace-98npt\" (UID: \"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1\") " pod="openshift-marketplace/redhat-marketplace-98npt" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.964358 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-879rp\" (UniqueName: \"kubernetes.io/projected/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-kube-api-access-879rp\") pod \"redhat-marketplace-98npt\" (UID: \"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1\") " pod="openshift-marketplace/redhat-marketplace-98npt" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.964843 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-catalog-content\") pod \"redhat-marketplace-98npt\" (UID: 
\"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1\") " pod="openshift-marketplace/redhat-marketplace-98npt" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.964969 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-utilities\") pod \"redhat-marketplace-98npt\" (UID: \"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1\") " pod="openshift-marketplace/redhat-marketplace-98npt" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.965569 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-utilities\") pod \"redhat-marketplace-98npt\" (UID: \"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1\") " pod="openshift-marketplace/redhat-marketplace-98npt" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.966093 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-catalog-content\") pod \"redhat-marketplace-98npt\" (UID: \"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1\") " pod="openshift-marketplace/redhat-marketplace-98npt" Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.970959 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mrrcx"] Jan 27 21:23:31 crc kubenswrapper[4858]: I0127 21:23:31.996898 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-879rp\" (UniqueName: \"kubernetes.io/projected/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-kube-api-access-879rp\") pod \"redhat-marketplace-98npt\" (UID: \"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1\") " pod="openshift-marketplace/redhat-marketplace-98npt" Jan 27 21:23:32 crc kubenswrapper[4858]: I0127 21:23:32.084446 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98npt" Jan 27 21:23:32 crc kubenswrapper[4858]: I0127 21:23:32.363542 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrrcx" event={"ID":"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9","Type":"ContainerStarted","Data":"dfdfda74e8acf1517d5aac9e5277bd46cd7481f0ed252f198673a075603fb521"} Jan 27 21:23:32 crc kubenswrapper[4858]: I0127 21:23:32.567404 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98npt"] Jan 27 21:23:33 crc kubenswrapper[4858]: I0127 21:23:33.376091 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r6qr" event={"ID":"d7908be5-35d1-49a0-8cf7-4b0102f4371c","Type":"ContainerStarted","Data":"c6f7db9a26d96d3e1706549da6c270754d3632dea144797c96264b230d846d37"} Jan 27 21:23:33 crc kubenswrapper[4858]: I0127 21:23:33.390818 4858 generic.go:334] "Generic (PLEG): container finished" podID="727e2878-bcc8-45d3-a6cf-8dad7bfa04a9" containerID="c8c490ba660c636532d2d87c250304759e8f5478c7e1653df1c5140c56efa5b2" exitCode=0 Jan 27 21:23:33 crc kubenswrapper[4858]: I0127 21:23:33.390906 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrrcx" event={"ID":"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9","Type":"ContainerDied","Data":"c8c490ba660c636532d2d87c250304759e8f5478c7e1653df1c5140c56efa5b2"} Jan 27 21:23:33 crc kubenswrapper[4858]: I0127 21:23:33.393975 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98npt" event={"ID":"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1","Type":"ContainerStarted","Data":"de081280b3d87304e692e7a29d5c188acd82b9fe501c7f1781a390c4445caaaa"} Jan 27 21:23:34 crc kubenswrapper[4858]: I0127 21:23:34.411352 4858 generic.go:334] "Generic (PLEG): container finished" podID="ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1" containerID="0b95428a4f5b951b8896a4fe80df7170abd4a0997c5c4bcff2f9a1b2dfaa3254" exitCode=0 Jan 27 21:23:34 crc kubenswrapper[4858]: I0127 21:23:34.411463 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98npt" event={"ID":"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1","Type":"ContainerDied","Data":"0b95428a4f5b951b8896a4fe80df7170abd4a0997c5c4bcff2f9a1b2dfaa3254"} Jan 27 21:23:34 crc kubenswrapper[4858]: I0127 21:23:34.416499 4858 generic.go:334] "Generic (PLEG): container finished" podID="d7908be5-35d1-49a0-8cf7-4b0102f4371c" containerID="c6f7db9a26d96d3e1706549da6c270754d3632dea144797c96264b230d846d37" exitCode=0 Jan 27 21:23:34 crc kubenswrapper[4858]: I0127 21:23:34.416604 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r6qr" event={"ID":"d7908be5-35d1-49a0-8cf7-4b0102f4371c","Type":"ContainerDied","Data":"c6f7db9a26d96d3e1706549da6c270754d3632dea144797c96264b230d846d37"} Jan 27 21:23:35 crc kubenswrapper[4858]: I0127 21:23:35.431936 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r6qr" event={"ID":"d7908be5-35d1-49a0-8cf7-4b0102f4371c","Type":"ContainerStarted","Data":"fd5a6e46c34977646c6b3b8dd519c258237045abfca33acebd4bee79f6202e2a"} Jan 27 21:23:35 crc kubenswrapper[4858]: I0127 21:23:35.434539 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrrcx" 
event={"ID":"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9","Type":"ContainerStarted","Data":"7f6eeab793244ec4ad931c4ec452bcb68e87aa74b366c7116cf5798896d18e7e"} Jan 27 21:23:35 crc kubenswrapper[4858]: I0127 21:23:35.437691 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98npt" event={"ID":"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1","Type":"ContainerStarted","Data":"537ec54794091f1a40dac0af8f6ad5cc14dc10ef1928a672dd094440a6073c13"} Jan 27 21:23:35 crc kubenswrapper[4858]: I0127 21:23:35.462726 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4r6qr" podStartSLOduration=2.883722959 podStartE2EDuration="6.462701656s" podCreationTimestamp="2026-01-27 21:23:29 +0000 UTC" firstStartedPulling="2026-01-27 21:23:31.295667493 +0000 UTC m=+4556.003483199" lastFinishedPulling="2026-01-27 21:23:34.87464619 +0000 UTC m=+4559.582461896" observedRunningTime="2026-01-27 21:23:35.455061521 +0000 UTC m=+4560.162877237" watchObservedRunningTime="2026-01-27 21:23:35.462701656 +0000 UTC m=+4560.170517362" Jan 27 21:23:37 crc kubenswrapper[4858]: I0127 21:23:37.466207 4858 generic.go:334] "Generic (PLEG): container finished" podID="ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1" containerID="537ec54794091f1a40dac0af8f6ad5cc14dc10ef1928a672dd094440a6073c13" exitCode=0 Jan 27 21:23:37 crc kubenswrapper[4858]: I0127 21:23:37.466314 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98npt" event={"ID":"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1","Type":"ContainerDied","Data":"537ec54794091f1a40dac0af8f6ad5cc14dc10ef1928a672dd094440a6073c13"} Jan 27 21:23:39 crc kubenswrapper[4858]: I0127 21:23:39.491959 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98npt" event={"ID":"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1","Type":"ContainerStarted","Data":"ac3834c51c93bb0c7d55a832e1c6cbb9aaa60739b7c7c951dcc118868f29bed7"} Jan 27 21:23:39 crc kubenswrapper[4858]: I0127 21:23:39.526996 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-98npt" podStartSLOduration=4.57602145 podStartE2EDuration="8.526975463s" podCreationTimestamp="2026-01-27 21:23:31 +0000 UTC" firstStartedPulling="2026-01-27 21:23:34.41382705 +0000 UTC m=+4559.121642756" lastFinishedPulling="2026-01-27 21:23:38.364781063 +0000 UTC m=+4563.072596769" observedRunningTime="2026-01-27 21:23:39.517729593 +0000 UTC m=+4564.225545319" watchObservedRunningTime="2026-01-27 21:23:39.526975463 +0000 UTC m=+4564.234791169" Jan 27 21:23:39 crc kubenswrapper[4858]: I0127 21:23:39.648724 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4r6qr" Jan 27 21:23:39 crc kubenswrapper[4858]: I0127 21:23:39.649068 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4r6qr" Jan 27 21:23:39 crc kubenswrapper[4858]: I0127 21:23:39.704641 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4r6qr" Jan 27 21:23:40 crc kubenswrapper[4858]: I0127 21:23:40.504252 4858 generic.go:334] "Generic (PLEG): container finished" podID="727e2878-bcc8-45d3-a6cf-8dad7bfa04a9" containerID="7f6eeab793244ec4ad931c4ec452bcb68e87aa74b366c7116cf5798896d18e7e" exitCode=0 Jan 27 21:23:40 crc kubenswrapper[4858]: I0127 21:23:40.504366 4858 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrrcx" event={"ID":"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9","Type":"ContainerDied","Data":"7f6eeab793244ec4ad931c4ec452bcb68e87aa74b366c7116cf5798896d18e7e"} Jan 27 21:23:40 crc kubenswrapper[4858]: I0127 21:23:40.562006 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4r6qr" Jan 27 21:23:41 crc kubenswrapper[4858]: I0127 21:23:41.517723 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrrcx" event={"ID":"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9","Type":"ContainerStarted","Data":"de2f482d43388b17b9af7851be960e247f82d19a679e9cb24489f52617b5a458"} Jan 27 21:23:41 crc kubenswrapper[4858]: I0127 21:23:41.541776 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mrrcx" podStartSLOduration=3.003253327 podStartE2EDuration="10.541754828s" podCreationTimestamp="2026-01-27 21:23:31 +0000 UTC" firstStartedPulling="2026-01-27 21:23:33.392996264 +0000 UTC m=+4558.100811970" lastFinishedPulling="2026-01-27 21:23:40.931497765 +0000 UTC m=+4565.639313471" observedRunningTime="2026-01-27 21:23:41.538437955 +0000 UTC m=+4566.246253661" watchObservedRunningTime="2026-01-27 21:23:41.541754828 +0000 UTC m=+4566.249570524" Jan 27 21:23:42 crc kubenswrapper[4858]: I0127 21:23:42.071632 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:23:42 crc kubenswrapper[4858]: E0127 21:23:42.072310 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:23:42 crc kubenswrapper[4858]: I0127 21:23:42.094076 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-98npt" Jan 27 21:23:42 crc kubenswrapper[4858]: I0127 21:23:42.094117 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-98npt" Jan 27 21:23:42 crc kubenswrapper[4858]: I0127 21:23:42.118849 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4r6qr"] Jan 27 21:23:43 crc kubenswrapper[4858]: I0127 21:23:43.142290 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-98npt" podUID="ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1" containerName="registry-server" probeResult="failure" output=< Jan 27 21:23:43 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 21:23:43 crc kubenswrapper[4858]: > Jan 27 21:23:43 crc kubenswrapper[4858]: I0127 21:23:43.533974 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4r6qr" podUID="d7908be5-35d1-49a0-8cf7-4b0102f4371c" containerName="registry-server" containerID="cri-o://fd5a6e46c34977646c6b3b8dd519c258237045abfca33acebd4bee79f6202e2a" gracePeriod=2 Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.215769 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4r6qr" Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.275982 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7908be5-35d1-49a0-8cf7-4b0102f4371c-catalog-content\") pod \"d7908be5-35d1-49a0-8cf7-4b0102f4371c\" (UID: \"d7908be5-35d1-49a0-8cf7-4b0102f4371c\") " Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.276151 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7908be5-35d1-49a0-8cf7-4b0102f4371c-utilities\") pod \"d7908be5-35d1-49a0-8cf7-4b0102f4371c\" (UID: \"d7908be5-35d1-49a0-8cf7-4b0102f4371c\") " Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.276329 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4blh2\" (UniqueName: \"kubernetes.io/projected/d7908be5-35d1-49a0-8cf7-4b0102f4371c-kube-api-access-4blh2\") pod \"d7908be5-35d1-49a0-8cf7-4b0102f4371c\" (UID: \"d7908be5-35d1-49a0-8cf7-4b0102f4371c\") " Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.276964 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7908be5-35d1-49a0-8cf7-4b0102f4371c-utilities" (OuterVolumeSpecName: "utilities") pod "d7908be5-35d1-49a0-8cf7-4b0102f4371c" (UID: "d7908be5-35d1-49a0-8cf7-4b0102f4371c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.284035 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7908be5-35d1-49a0-8cf7-4b0102f4371c-kube-api-access-4blh2" (OuterVolumeSpecName: "kube-api-access-4blh2") pod "d7908be5-35d1-49a0-8cf7-4b0102f4371c" (UID: "d7908be5-35d1-49a0-8cf7-4b0102f4371c"). InnerVolumeSpecName "kube-api-access-4blh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.331255 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7908be5-35d1-49a0-8cf7-4b0102f4371c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7908be5-35d1-49a0-8cf7-4b0102f4371c" (UID: "d7908be5-35d1-49a0-8cf7-4b0102f4371c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.380191 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7908be5-35d1-49a0-8cf7-4b0102f4371c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.380238 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7908be5-35d1-49a0-8cf7-4b0102f4371c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.380253 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4blh2\" (UniqueName: \"kubernetes.io/projected/d7908be5-35d1-49a0-8cf7-4b0102f4371c-kube-api-access-4blh2\") on node \"crc\" DevicePath \"\"" Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.552494 4858 generic.go:334] "Generic (PLEG): container finished" podID="d7908be5-35d1-49a0-8cf7-4b0102f4371c" containerID="fd5a6e46c34977646c6b3b8dd519c258237045abfca33acebd4bee79f6202e2a" exitCode=0 Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.552624 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r6qr" event={"ID":"d7908be5-35d1-49a0-8cf7-4b0102f4371c","Type":"ContainerDied","Data":"fd5a6e46c34977646c6b3b8dd519c258237045abfca33acebd4bee79f6202e2a"} Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.552674 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r6qr" event={"ID":"d7908be5-35d1-49a0-8cf7-4b0102f4371c","Type":"ContainerDied","Data":"5d2c8da32c57b574277d89bf89596552dc0d92ce6a79fed70e6c8bc3314f583c"} Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.552710 4858 scope.go:117] "RemoveContainer" containerID="fd5a6e46c34977646c6b3b8dd519c258237045abfca33acebd4bee79f6202e2a" Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.552948 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4r6qr" Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.600883 4858 scope.go:117] "RemoveContainer" containerID="c6f7db9a26d96d3e1706549da6c270754d3632dea144797c96264b230d846d37" Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.601949 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4r6qr"] Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.614798 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4r6qr"] Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.635777 4858 scope.go:117] "RemoveContainer" containerID="5454778bc70e59943629203c18b286eeb5c8163c63c713b1a971798e9ef87616" Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.690330 4858 scope.go:117] "RemoveContainer" containerID="fd5a6e46c34977646c6b3b8dd519c258237045abfca33acebd4bee79f6202e2a" Jan 27 21:23:44 crc kubenswrapper[4858]: E0127 21:23:44.690943 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd5a6e46c34977646c6b3b8dd519c258237045abfca33acebd4bee79f6202e2a\": container with ID starting with fd5a6e46c34977646c6b3b8dd519c258237045abfca33acebd4bee79f6202e2a not found: ID does not exist" containerID="fd5a6e46c34977646c6b3b8dd519c258237045abfca33acebd4bee79f6202e2a" Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.691003 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd5a6e46c34977646c6b3b8dd519c258237045abfca33acebd4bee79f6202e2a"} err="failed to get container status \"fd5a6e46c34977646c6b3b8dd519c258237045abfca33acebd4bee79f6202e2a\": rpc error: code = NotFound desc = could not find container \"fd5a6e46c34977646c6b3b8dd519c258237045abfca33acebd4bee79f6202e2a\": container with ID starting with fd5a6e46c34977646c6b3b8dd519c258237045abfca33acebd4bee79f6202e2a not found: ID does not exist" Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.691039 4858 scope.go:117] "RemoveContainer" containerID="c6f7db9a26d96d3e1706549da6c270754d3632dea144797c96264b230d846d37" Jan 27 21:23:44 crc kubenswrapper[4858]: E0127 21:23:44.691425 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6f7db9a26d96d3e1706549da6c270754d3632dea144797c96264b230d846d37\": container with ID starting with c6f7db9a26d96d3e1706549da6c270754d3632dea144797c96264b230d846d37 not found: ID does not exist" containerID="c6f7db9a26d96d3e1706549da6c270754d3632dea144797c96264b230d846d37" Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.691472 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6f7db9a26d96d3e1706549da6c270754d3632dea144797c96264b230d846d37"} err="failed to get container status \"c6f7db9a26d96d3e1706549da6c270754d3632dea144797c96264b230d846d37\": rpc error: code = NotFound desc = could not find container \"c6f7db9a26d96d3e1706549da6c270754d3632dea144797c96264b230d846d37\": container with ID starting with c6f7db9a26d96d3e1706549da6c270754d3632dea144797c96264b230d846d37 not found: ID does not exist" Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.691501 4858 scope.go:117] "RemoveContainer" containerID="5454778bc70e59943629203c18b286eeb5c8163c63c713b1a971798e9ef87616" Jan 27 21:23:44 crc kubenswrapper[4858]: E0127 21:23:44.691910 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5454778bc70e59943629203c18b286eeb5c8163c63c713b1a971798e9ef87616\": container with ID starting with 5454778bc70e59943629203c18b286eeb5c8163c63c713b1a971798e9ef87616 not found: ID does not exist" containerID="5454778bc70e59943629203c18b286eeb5c8163c63c713b1a971798e9ef87616" Jan 27 21:23:44 crc kubenswrapper[4858]: I0127 21:23:44.691952 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5454778bc70e59943629203c18b286eeb5c8163c63c713b1a971798e9ef87616"} err="failed to get container status \"5454778bc70e59943629203c18b286eeb5c8163c63c713b1a971798e9ef87616\": rpc error: code = NotFound desc = could not find container \"5454778bc70e59943629203c18b286eeb5c8163c63c713b1a971798e9ef87616\": container with ID starting with 5454778bc70e59943629203c18b286eeb5c8163c63c713b1a971798e9ef87616 not found: ID does not exist" Jan 27 21:23:46 crc kubenswrapper[4858]: I0127 21:23:46.083406 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7908be5-35d1-49a0-8cf7-4b0102f4371c" path="/var/lib/kubelet/pods/d7908be5-35d1-49a0-8cf7-4b0102f4371c/volumes" Jan 27 21:23:51 crc kubenswrapper[4858]: I0127 21:23:51.447699 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mrrcx" Jan 27 21:23:51 crc kubenswrapper[4858]: I0127 21:23:51.448241 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mrrcx" Jan 27 21:23:51 crc kubenswrapper[4858]: I0127 21:23:51.500518 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mrrcx" Jan 27 21:23:51 crc kubenswrapper[4858]: I0127 21:23:51.686356 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mrrcx" Jan 27 21:23:51 crc kubenswrapper[4858]: I0127 21:23:51.748079 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mrrcx"] Jan 27 21:23:52 crc kubenswrapper[4858]: I0127 21:23:52.136016 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-98npt" Jan 27 21:23:52 crc kubenswrapper[4858]: I0127 21:23:52.195472 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-98npt" Jan 27 21:23:53 crc kubenswrapper[4858]: I0127 21:23:53.644475 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mrrcx" podUID="727e2878-bcc8-45d3-a6cf-8dad7bfa04a9" containerName="registry-server" containerID="cri-o://de2f482d43388b17b9af7851be960e247f82d19a679e9cb24489f52617b5a458" gracePeriod=2 Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.075157 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:23:54 crc kubenswrapper[4858]: E0127 21:23:54.076890 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:23:54 crc 
kubenswrapper[4858]: I0127 21:23:54.127721 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrrcx" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.148602 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98npt"] Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.148876 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-98npt" podUID="ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1" containerName="registry-server" containerID="cri-o://ac3834c51c93bb0c7d55a832e1c6cbb9aaa60739b7c7c951dcc118868f29bed7" gracePeriod=2 Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.212607 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-utilities\") pod \"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9\" (UID: \"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9\") " Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.213494 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6gsv\" (UniqueName: \"kubernetes.io/projected/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-kube-api-access-d6gsv\") pod \"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9\" (UID: \"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9\") " Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.213617 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-catalog-content\") pod \"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9\" (UID: \"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9\") " Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.213616 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-utilities" (OuterVolumeSpecName: "utilities") pod "727e2878-bcc8-45d3-a6cf-8dad7bfa04a9" (UID: "727e2878-bcc8-45d3-a6cf-8dad7bfa04a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.214820 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.221324 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-kube-api-access-d6gsv" (OuterVolumeSpecName: "kube-api-access-d6gsv") pod "727e2878-bcc8-45d3-a6cf-8dad7bfa04a9" (UID: "727e2878-bcc8-45d3-a6cf-8dad7bfa04a9"). InnerVolumeSpecName "kube-api-access-d6gsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.317215 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6gsv\" (UniqueName: \"kubernetes.io/projected/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-kube-api-access-d6gsv\") on node \"crc\" DevicePath \"\"" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.440935 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "727e2878-bcc8-45d3-a6cf-8dad7bfa04a9" (UID: "727e2878-bcc8-45d3-a6cf-8dad7bfa04a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.523746 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.674090 4858 generic.go:334] "Generic (PLEG): container finished" podID="727e2878-bcc8-45d3-a6cf-8dad7bfa04a9" containerID="de2f482d43388b17b9af7851be960e247f82d19a679e9cb24489f52617b5a458" exitCode=0 Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.674137 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrrcx" event={"ID":"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9","Type":"ContainerDied","Data":"de2f482d43388b17b9af7851be960e247f82d19a679e9cb24489f52617b5a458"} Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.674172 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrrcx" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.674190 4858 scope.go:117] "RemoveContainer" containerID="de2f482d43388b17b9af7851be960e247f82d19a679e9cb24489f52617b5a458" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.674176 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrrcx" event={"ID":"727e2878-bcc8-45d3-a6cf-8dad7bfa04a9","Type":"ContainerDied","Data":"dfdfda74e8acf1517d5aac9e5277bd46cd7481f0ed252f198673a075603fb521"} Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.679050 4858 generic.go:334] "Generic (PLEG): container finished" podID="ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1" containerID="ac3834c51c93bb0c7d55a832e1c6cbb9aaa60739b7c7c951dcc118868f29bed7" exitCode=0 Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.679249 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98npt" event={"ID":"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1","Type":"ContainerDied","Data":"ac3834c51c93bb0c7d55a832e1c6cbb9aaa60739b7c7c951dcc118868f29bed7"} Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.712532 4858 scope.go:117] "RemoveContainer" containerID="7f6eeab793244ec4ad931c4ec452bcb68e87aa74b366c7116cf5798896d18e7e" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.714018 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mrrcx"] Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.724246 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mrrcx"] Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.731045 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98npt" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.748493 4858 scope.go:117] "RemoveContainer" containerID="c8c490ba660c636532d2d87c250304759e8f5478c7e1653df1c5140c56efa5b2" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.779871 4858 scope.go:117] "RemoveContainer" containerID="de2f482d43388b17b9af7851be960e247f82d19a679e9cb24489f52617b5a458" Jan 27 21:23:54 crc kubenswrapper[4858]: E0127 21:23:54.780941 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de2f482d43388b17b9af7851be960e247f82d19a679e9cb24489f52617b5a458\": container with ID starting with de2f482d43388b17b9af7851be960e247f82d19a679e9cb24489f52617b5a458 not found: ID does not exist" containerID="de2f482d43388b17b9af7851be960e247f82d19a679e9cb24489f52617b5a458" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.780970 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de2f482d43388b17b9af7851be960e247f82d19a679e9cb24489f52617b5a458"} err="failed to get container status \"de2f482d43388b17b9af7851be960e247f82d19a679e9cb24489f52617b5a458\": rpc error: code = NotFound desc = could not find container \"de2f482d43388b17b9af7851be960e247f82d19a679e9cb24489f52617b5a458\": container with ID starting with de2f482d43388b17b9af7851be960e247f82d19a679e9cb24489f52617b5a458 not found: ID does not exist" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.780990 4858 scope.go:117] "RemoveContainer" containerID="7f6eeab793244ec4ad931c4ec452bcb68e87aa74b366c7116cf5798896d18e7e" Jan 27 21:23:54 crc kubenswrapper[4858]: E0127 21:23:54.781198 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f6eeab793244ec4ad931c4ec452bcb68e87aa74b366c7116cf5798896d18e7e\": container with ID starting with 7f6eeab793244ec4ad931c4ec452bcb68e87aa74b366c7116cf5798896d18e7e not found: ID does not exist" containerID="7f6eeab793244ec4ad931c4ec452bcb68e87aa74b366c7116cf5798896d18e7e" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.781218 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f6eeab793244ec4ad931c4ec452bcb68e87aa74b366c7116cf5798896d18e7e"} err="failed to get container status \"7f6eeab793244ec4ad931c4ec452bcb68e87aa74b366c7116cf5798896d18e7e\": rpc error: code = NotFound desc = could not find container \"7f6eeab793244ec4ad931c4ec452bcb68e87aa74b366c7116cf5798896d18e7e\": container with ID starting with 7f6eeab793244ec4ad931c4ec452bcb68e87aa74b366c7116cf5798896d18e7e not found: ID does not exist" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.781233 4858 scope.go:117] "RemoveContainer" containerID="c8c490ba660c636532d2d87c250304759e8f5478c7e1653df1c5140c56efa5b2" Jan 27 21:23:54 crc kubenswrapper[4858]: E0127 21:23:54.781864 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8c490ba660c636532d2d87c250304759e8f5478c7e1653df1c5140c56efa5b2\": container with ID starting with c8c490ba660c636532d2d87c250304759e8f5478c7e1653df1c5140c56efa5b2 not found: ID does not exist" containerID="c8c490ba660c636532d2d87c250304759e8f5478c7e1653df1c5140c56efa5b2" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.781896 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c8c490ba660c636532d2d87c250304759e8f5478c7e1653df1c5140c56efa5b2"} err="failed to get container status \"c8c490ba660c636532d2d87c250304759e8f5478c7e1653df1c5140c56efa5b2\": rpc error: code = NotFound desc = could not find container \"c8c490ba660c636532d2d87c250304759e8f5478c7e1653df1c5140c56efa5b2\": container with ID starting with c8c490ba660c636532d2d87c250304759e8f5478c7e1653df1c5140c56efa5b2 not found: ID does not exist" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.829885 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-catalog-content\") pod \"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1\" (UID: \"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1\") " Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.829966 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-879rp\" (UniqueName: \"kubernetes.io/projected/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-kube-api-access-879rp\") pod \"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1\" (UID: \"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1\") " Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.830007 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-utilities\") pod \"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1\" (UID: \"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1\") " Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.836935 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-utilities" (OuterVolumeSpecName: "utilities") pod "ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1" (UID: "ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.852514 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-kube-api-access-879rp" (OuterVolumeSpecName: "kube-api-access-879rp") pod "ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1" (UID: "ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1"). InnerVolumeSpecName "kube-api-access-879rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.870095 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1" (UID: "ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.938947 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.938987 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-879rp\" (UniqueName: \"kubernetes.io/projected/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-kube-api-access-879rp\") on node \"crc\" DevicePath \"\"" Jan 27 21:23:54 crc kubenswrapper[4858]: I0127 21:23:54.938999 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:23:55 crc kubenswrapper[4858]: I0127 21:23:55.690008 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98npt" event={"ID":"ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1","Type":"ContainerDied","Data":"de081280b3d87304e692e7a29d5c188acd82b9fe501c7f1781a390c4445caaaa"} Jan 27 21:23:55 crc kubenswrapper[4858]: I0127 21:23:55.691378 4858 scope.go:117] "RemoveContainer" containerID="ac3834c51c93bb0c7d55a832e1c6cbb9aaa60739b7c7c951dcc118868f29bed7" Jan 27 21:23:55 crc kubenswrapper[4858]: I0127 21:23:55.690034 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98npt" Jan 27 21:23:55 crc kubenswrapper[4858]: I0127 21:23:55.718125 4858 scope.go:117] "RemoveContainer" containerID="537ec54794091f1a40dac0af8f6ad5cc14dc10ef1928a672dd094440a6073c13" Jan 27 21:23:55 crc kubenswrapper[4858]: I0127 21:23:55.736757 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98npt"] Jan 27 21:23:55 crc kubenswrapper[4858]: I0127 21:23:55.745665 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-98npt"] Jan 27 21:23:55 crc kubenswrapper[4858]: I0127 21:23:55.750496 4858 scope.go:117] "RemoveContainer" containerID="0b95428a4f5b951b8896a4fe80df7170abd4a0997c5c4bcff2f9a1b2dfaa3254" Jan 27 21:23:56 crc kubenswrapper[4858]: I0127 21:23:56.084457 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="727e2878-bcc8-45d3-a6cf-8dad7bfa04a9" path="/var/lib/kubelet/pods/727e2878-bcc8-45d3-a6cf-8dad7bfa04a9/volumes" Jan 27 21:23:56 crc kubenswrapper[4858]: I0127 21:23:56.085920 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1" path="/var/lib/kubelet/pods/ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1/volumes" Jan 27 21:24:08 crc kubenswrapper[4858]: I0127 21:24:08.074301 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:24:08 crc kubenswrapper[4858]: E0127 21:24:08.075071 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:24:20 crc kubenswrapper[4858]: I0127 21:24:20.070828 4858 scope.go:117] "RemoveContainer" 
containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:24:20 crc kubenswrapper[4858]: E0127 21:24:20.071441 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:24:31 crc kubenswrapper[4858]: I0127 21:24:31.071195 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:24:32 crc kubenswrapper[4858]: I0127 21:24:32.018633 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"79cbeeb5ef365701339444fc3e244f7bf6ac3ba14d849c3defe1779ab3765e81"} Jan 27 21:26:59 crc kubenswrapper[4858]: I0127 21:26:59.328502 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:26:59 crc kubenswrapper[4858]: I0127 21:26:59.329071 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:27:29 crc kubenswrapper[4858]: I0127 21:27:29.329327 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:27:29 crc kubenswrapper[4858]: I0127 21:27:29.329793 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:27:59 crc kubenswrapper[4858]: I0127 21:27:59.329117 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:27:59 crc kubenswrapper[4858]: I0127 21:27:59.329777 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:27:59 crc kubenswrapper[4858]: I0127 21:27:59.329839 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 21:27:59 
crc kubenswrapper[4858]: I0127 21:27:59.330803 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"79cbeeb5ef365701339444fc3e244f7bf6ac3ba14d849c3defe1779ab3765e81"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 21:27:59 crc kubenswrapper[4858]: I0127 21:27:59.330878 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://79cbeeb5ef365701339444fc3e244f7bf6ac3ba14d849c3defe1779ab3765e81" gracePeriod=600 Jan 27 21:28:00 crc kubenswrapper[4858]: I0127 21:28:00.084434 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="79cbeeb5ef365701339444fc3e244f7bf6ac3ba14d849c3defe1779ab3765e81" exitCode=0 Jan 27 21:28:00 crc kubenswrapper[4858]: I0127 21:28:00.084506 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"79cbeeb5ef365701339444fc3e244f7bf6ac3ba14d849c3defe1779ab3765e81"} Jan 27 21:28:00 crc kubenswrapper[4858]: I0127 21:28:00.084810 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589"} Jan 27 21:28:00 crc kubenswrapper[4858]: I0127 21:28:00.084835 4858 scope.go:117] "RemoveContainer" containerID="a816bc7b4531580dc5297b3241c2f3e669d1737c15968c753408b365086ab30c" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.094339 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mb9f9"] Jan 27 21:29:39 crc kubenswrapper[4858]: E0127 21:29:39.100940 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727e2878-bcc8-45d3-a6cf-8dad7bfa04a9" containerName="registry-server" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.100967 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="727e2878-bcc8-45d3-a6cf-8dad7bfa04a9" containerName="registry-server" Jan 27 21:29:39 crc kubenswrapper[4858]: E0127 21:29:39.100986 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727e2878-bcc8-45d3-a6cf-8dad7bfa04a9" containerName="extract-utilities" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.100995 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="727e2878-bcc8-45d3-a6cf-8dad7bfa04a9" containerName="extract-utilities" Jan 27 21:29:39 crc kubenswrapper[4858]: E0127 21:29:39.101011 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727e2878-bcc8-45d3-a6cf-8dad7bfa04a9" containerName="extract-content" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.101019 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="727e2878-bcc8-45d3-a6cf-8dad7bfa04a9" containerName="extract-content" Jan 27 21:29:39 crc kubenswrapper[4858]: E0127 21:29:39.101028 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1" containerName="extract-content" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.101037 
4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1" containerName="extract-content" Jan 27 21:29:39 crc kubenswrapper[4858]: E0127 21:29:39.101067 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1" containerName="extract-utilities" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.101077 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1" containerName="extract-utilities" Jan 27 21:29:39 crc kubenswrapper[4858]: E0127 21:29:39.101094 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7908be5-35d1-49a0-8cf7-4b0102f4371c" containerName="extract-content" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.101100 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7908be5-35d1-49a0-8cf7-4b0102f4371c" containerName="extract-content" Jan 27 21:29:39 crc kubenswrapper[4858]: E0127 21:29:39.101112 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1" containerName="registry-server" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.101120 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1" containerName="registry-server" Jan 27 21:29:39 crc kubenswrapper[4858]: E0127 21:29:39.101132 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7908be5-35d1-49a0-8cf7-4b0102f4371c" containerName="registry-server" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.101140 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7908be5-35d1-49a0-8cf7-4b0102f4371c" containerName="registry-server" Jan 27 21:29:39 crc kubenswrapper[4858]: E0127 21:29:39.101163 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7908be5-35d1-49a0-8cf7-4b0102f4371c" containerName="extract-utilities" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.101169 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7908be5-35d1-49a0-8cf7-4b0102f4371c" containerName="extract-utilities" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.101358 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffeb50cd-8f19-4c8f-a69f-0fc9ddb77ef1" containerName="registry-server" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.101371 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="727e2878-bcc8-45d3-a6cf-8dad7bfa04a9" containerName="registry-server" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.101384 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7908be5-35d1-49a0-8cf7-4b0102f4371c" containerName="registry-server" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.103060 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mb9f9" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.111275 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mb9f9"] Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.249028 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lntpl\" (UniqueName: \"kubernetes.io/projected/1532963d-4da7-4cae-8bcd-41d76bb05683-kube-api-access-lntpl\") pod \"certified-operators-mb9f9\" (UID: \"1532963d-4da7-4cae-8bcd-41d76bb05683\") " pod="openshift-marketplace/certified-operators-mb9f9" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.249164 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1532963d-4da7-4cae-8bcd-41d76bb05683-utilities\") pod \"certified-operators-mb9f9\" (UID: \"1532963d-4da7-4cae-8bcd-41d76bb05683\") " pod="openshift-marketplace/certified-operators-mb9f9" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.249322 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1532963d-4da7-4cae-8bcd-41d76bb05683-catalog-content\") pod \"certified-operators-mb9f9\" (UID: \"1532963d-4da7-4cae-8bcd-41d76bb05683\") " pod="openshift-marketplace/certified-operators-mb9f9" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.351410 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1532963d-4da7-4cae-8bcd-41d76bb05683-utilities\") pod \"certified-operators-mb9f9\" (UID: \"1532963d-4da7-4cae-8bcd-41d76bb05683\") " pod="openshift-marketplace/certified-operators-mb9f9" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.351987 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1532963d-4da7-4cae-8bcd-41d76bb05683-catalog-content\") pod \"certified-operators-mb9f9\" (UID: \"1532963d-4da7-4cae-8bcd-41d76bb05683\") " pod="openshift-marketplace/certified-operators-mb9f9" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.352123 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1532963d-4da7-4cae-8bcd-41d76bb05683-utilities\") pod \"certified-operators-mb9f9\" (UID: \"1532963d-4da7-4cae-8bcd-41d76bb05683\") " pod="openshift-marketplace/certified-operators-mb9f9" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.352136 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lntpl\" (UniqueName: \"kubernetes.io/projected/1532963d-4da7-4cae-8bcd-41d76bb05683-kube-api-access-lntpl\") pod \"certified-operators-mb9f9\" (UID: \"1532963d-4da7-4cae-8bcd-41d76bb05683\") " pod="openshift-marketplace/certified-operators-mb9f9" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.352453 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1532963d-4da7-4cae-8bcd-41d76bb05683-catalog-content\") pod \"certified-operators-mb9f9\" (UID: \"1532963d-4da7-4cae-8bcd-41d76bb05683\") " pod="openshift-marketplace/certified-operators-mb9f9" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.379775 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lntpl\" (UniqueName: \"kubernetes.io/projected/1532963d-4da7-4cae-8bcd-41d76bb05683-kube-api-access-lntpl\") pod \"certified-operators-mb9f9\" (UID: \"1532963d-4da7-4cae-8bcd-41d76bb05683\") " pod="openshift-marketplace/certified-operators-mb9f9" Jan 27 21:29:39 crc kubenswrapper[4858]: I0127 21:29:39.448600 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mb9f9" Jan 27 21:29:40 crc kubenswrapper[4858]: W0127 21:29:40.016728 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1532963d_4da7_4cae_8bcd_41d76bb05683.slice/crio-7c2d22fbdceee6ee2026a22f3116a178a238987afacf6d83f65ed34c2b40474f WatchSource:0}: Error finding container 7c2d22fbdceee6ee2026a22f3116a178a238987afacf6d83f65ed34c2b40474f: Status 404 returned error can't find the container with id 7c2d22fbdceee6ee2026a22f3116a178a238987afacf6d83f65ed34c2b40474f Jan 27 21:29:40 crc kubenswrapper[4858]: I0127 21:29:40.025194 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mb9f9"] Jan 27 21:29:40 crc kubenswrapper[4858]: I0127 21:29:40.082239 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb9f9" event={"ID":"1532963d-4da7-4cae-8bcd-41d76bb05683","Type":"ContainerStarted","Data":"7c2d22fbdceee6ee2026a22f3116a178a238987afacf6d83f65ed34c2b40474f"} Jan 27 21:29:41 crc kubenswrapper[4858]: I0127 21:29:41.085897 4858 generic.go:334] "Generic (PLEG): container finished" podID="1532963d-4da7-4cae-8bcd-41d76bb05683" containerID="b0bbc28315ba231d5b6cd90cb9d938f8e113cea6d913e963cf2d88fa67c7ec21" exitCode=0 Jan 27 21:29:41 crc kubenswrapper[4858]: I0127 21:29:41.085987 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb9f9" event={"ID":"1532963d-4da7-4cae-8bcd-41d76bb05683","Type":"ContainerDied","Data":"b0bbc28315ba231d5b6cd90cb9d938f8e113cea6d913e963cf2d88fa67c7ec21"} Jan 27 21:29:41 crc kubenswrapper[4858]: I0127 21:29:41.089115 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 21:29:42 crc kubenswrapper[4858]: I0127 21:29:42.096400 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb9f9" event={"ID":"1532963d-4da7-4cae-8bcd-41d76bb05683","Type":"ContainerStarted","Data":"3dcf0308219ba039a1772dca6697754a06970d2c0c5727675a2b65d4076d7fcb"} Jan 27 21:29:44 crc kubenswrapper[4858]: I0127 21:29:44.119592 4858 generic.go:334] "Generic (PLEG): container finished" podID="1532963d-4da7-4cae-8bcd-41d76bb05683" containerID="3dcf0308219ba039a1772dca6697754a06970d2c0c5727675a2b65d4076d7fcb" exitCode=0 Jan 27 21:29:44 crc kubenswrapper[4858]: I0127 21:29:44.119647 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb9f9" event={"ID":"1532963d-4da7-4cae-8bcd-41d76bb05683","Type":"ContainerDied","Data":"3dcf0308219ba039a1772dca6697754a06970d2c0c5727675a2b65d4076d7fcb"} Jan 27 21:29:45 crc kubenswrapper[4858]: I0127 21:29:45.139664 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb9f9" event={"ID":"1532963d-4da7-4cae-8bcd-41d76bb05683","Type":"ContainerStarted","Data":"ad4af4a4f5fdcb01e47e5f50a49c89bd277b7d5aa418a63696c1958430ee57d0"} Jan 27 21:29:45 crc kubenswrapper[4858]: I0127 
21:29:45.171365 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mb9f9" podStartSLOduration=2.6193580990000003 podStartE2EDuration="6.171338835s" podCreationTimestamp="2026-01-27 21:29:39 +0000 UTC" firstStartedPulling="2026-01-27 21:29:41.088778252 +0000 UTC m=+4925.796593968" lastFinishedPulling="2026-01-27 21:29:44.640758988 +0000 UTC m=+4929.348574704" observedRunningTime="2026-01-27 21:29:45.160083817 +0000 UTC m=+4929.867899533" watchObservedRunningTime="2026-01-27 21:29:45.171338835 +0000 UTC m=+4929.879154541" Jan 27 21:29:49 crc kubenswrapper[4858]: I0127 21:29:49.450392 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mb9f9" Jan 27 21:29:49 crc kubenswrapper[4858]: I0127 21:29:49.450988 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mb9f9" Jan 27 21:29:49 crc kubenswrapper[4858]: I0127 21:29:49.509659 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mb9f9" Jan 27 21:29:50 crc kubenswrapper[4858]: I0127 21:29:50.240564 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mb9f9" Jan 27 21:29:50 crc kubenswrapper[4858]: I0127 21:29:50.302054 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mb9f9"] Jan 27 21:29:52 crc kubenswrapper[4858]: I0127 21:29:52.211700 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mb9f9" podUID="1532963d-4da7-4cae-8bcd-41d76bb05683" containerName="registry-server" containerID="cri-o://ad4af4a4f5fdcb01e47e5f50a49c89bd277b7d5aa418a63696c1958430ee57d0" gracePeriod=2 Jan 27 21:29:52 crc kubenswrapper[4858]: I0127 21:29:52.689695 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mb9f9" Jan 27 21:29:52 crc kubenswrapper[4858]: I0127 21:29:52.773827 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1532963d-4da7-4cae-8bcd-41d76bb05683-catalog-content\") pod \"1532963d-4da7-4cae-8bcd-41d76bb05683\" (UID: \"1532963d-4da7-4cae-8bcd-41d76bb05683\") " Jan 27 21:29:52 crc kubenswrapper[4858]: I0127 21:29:52.774041 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1532963d-4da7-4cae-8bcd-41d76bb05683-utilities\") pod \"1532963d-4da7-4cae-8bcd-41d76bb05683\" (UID: \"1532963d-4da7-4cae-8bcd-41d76bb05683\") " Jan 27 21:29:52 crc kubenswrapper[4858]: I0127 21:29:52.774165 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lntpl\" (UniqueName: \"kubernetes.io/projected/1532963d-4da7-4cae-8bcd-41d76bb05683-kube-api-access-lntpl\") pod \"1532963d-4da7-4cae-8bcd-41d76bb05683\" (UID: \"1532963d-4da7-4cae-8bcd-41d76bb05683\") " Jan 27 21:29:52 crc kubenswrapper[4858]: I0127 21:29:52.774930 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1532963d-4da7-4cae-8bcd-41d76bb05683-utilities" (OuterVolumeSpecName: "utilities") pod "1532963d-4da7-4cae-8bcd-41d76bb05683" (UID: "1532963d-4da7-4cae-8bcd-41d76bb05683"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:29:52 crc kubenswrapper[4858]: I0127 21:29:52.779789 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1532963d-4da7-4cae-8bcd-41d76bb05683-kube-api-access-lntpl" (OuterVolumeSpecName: "kube-api-access-lntpl") pod "1532963d-4da7-4cae-8bcd-41d76bb05683" (UID: "1532963d-4da7-4cae-8bcd-41d76bb05683"). InnerVolumeSpecName "kube-api-access-lntpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:29:52 crc kubenswrapper[4858]: I0127 21:29:52.877432 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1532963d-4da7-4cae-8bcd-41d76bb05683-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:29:52 crc kubenswrapper[4858]: I0127 21:29:52.877477 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lntpl\" (UniqueName: \"kubernetes.io/projected/1532963d-4da7-4cae-8bcd-41d76bb05683-kube-api-access-lntpl\") on node \"crc\" DevicePath \"\"" Jan 27 21:29:53 crc kubenswrapper[4858]: I0127 21:29:53.221407 4858 generic.go:334] "Generic (PLEG): container finished" podID="1532963d-4da7-4cae-8bcd-41d76bb05683" containerID="ad4af4a4f5fdcb01e47e5f50a49c89bd277b7d5aa418a63696c1958430ee57d0" exitCode=0 Jan 27 21:29:53 crc kubenswrapper[4858]: I0127 21:29:53.221456 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mb9f9" Jan 27 21:29:53 crc kubenswrapper[4858]: I0127 21:29:53.221472 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb9f9" event={"ID":"1532963d-4da7-4cae-8bcd-41d76bb05683","Type":"ContainerDied","Data":"ad4af4a4f5fdcb01e47e5f50a49c89bd277b7d5aa418a63696c1958430ee57d0"} Jan 27 21:29:53 crc kubenswrapper[4858]: I0127 21:29:53.221852 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb9f9" event={"ID":"1532963d-4da7-4cae-8bcd-41d76bb05683","Type":"ContainerDied","Data":"7c2d22fbdceee6ee2026a22f3116a178a238987afacf6d83f65ed34c2b40474f"} Jan 27 21:29:53 crc kubenswrapper[4858]: I0127 21:29:53.221875 4858 scope.go:117] "RemoveContainer" containerID="ad4af4a4f5fdcb01e47e5f50a49c89bd277b7d5aa418a63696c1958430ee57d0" Jan 27 21:29:53 crc kubenswrapper[4858]: I0127 21:29:53.238870 4858 scope.go:117] "RemoveContainer" containerID="3dcf0308219ba039a1772dca6697754a06970d2c0c5727675a2b65d4076d7fcb" Jan 27 21:29:53 crc kubenswrapper[4858]: I0127 21:29:53.262602 4858 scope.go:117] "RemoveContainer" containerID="b0bbc28315ba231d5b6cd90cb9d938f8e113cea6d913e963cf2d88fa67c7ec21" Jan 27 21:29:53 crc kubenswrapper[4858]: I0127 21:29:53.309666 4858 scope.go:117] "RemoveContainer" containerID="ad4af4a4f5fdcb01e47e5f50a49c89bd277b7d5aa418a63696c1958430ee57d0" Jan 27 21:29:53 crc kubenswrapper[4858]: E0127 21:29:53.310049 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad4af4a4f5fdcb01e47e5f50a49c89bd277b7d5aa418a63696c1958430ee57d0\": container with ID starting with ad4af4a4f5fdcb01e47e5f50a49c89bd277b7d5aa418a63696c1958430ee57d0 not found: ID does not exist" containerID="ad4af4a4f5fdcb01e47e5f50a49c89bd277b7d5aa418a63696c1958430ee57d0" Jan 27 21:29:53 crc kubenswrapper[4858]: I0127 21:29:53.310093 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ad4af4a4f5fdcb01e47e5f50a49c89bd277b7d5aa418a63696c1958430ee57d0"} err="failed to get container status \"ad4af4a4f5fdcb01e47e5f50a49c89bd277b7d5aa418a63696c1958430ee57d0\": rpc error: code = NotFound desc = could not find container \"ad4af4a4f5fdcb01e47e5f50a49c89bd277b7d5aa418a63696c1958430ee57d0\": container with ID starting with ad4af4a4f5fdcb01e47e5f50a49c89bd277b7d5aa418a63696c1958430ee57d0 not found: ID does not exist" Jan 27 21:29:53 crc kubenswrapper[4858]: I0127 21:29:53.310120 4858 scope.go:117] "RemoveContainer" containerID="3dcf0308219ba039a1772dca6697754a06970d2c0c5727675a2b65d4076d7fcb" Jan 27 21:29:53 crc kubenswrapper[4858]: E0127 21:29:53.310362 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dcf0308219ba039a1772dca6697754a06970d2c0c5727675a2b65d4076d7fcb\": container with ID starting with 3dcf0308219ba039a1772dca6697754a06970d2c0c5727675a2b65d4076d7fcb not found: ID does not exist" containerID="3dcf0308219ba039a1772dca6697754a06970d2c0c5727675a2b65d4076d7fcb" Jan 27 21:29:53 crc kubenswrapper[4858]: I0127 21:29:53.310379 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dcf0308219ba039a1772dca6697754a06970d2c0c5727675a2b65d4076d7fcb"} err="failed to get container status \"3dcf0308219ba039a1772dca6697754a06970d2c0c5727675a2b65d4076d7fcb\": rpc error: code = NotFound desc = could not find container \"3dcf0308219ba039a1772dca6697754a06970d2c0c5727675a2b65d4076d7fcb\": container with ID starting with 3dcf0308219ba039a1772dca6697754a06970d2c0c5727675a2b65d4076d7fcb not found: ID does not exist" Jan 27 21:29:53 crc kubenswrapper[4858]: I0127 21:29:53.310391 4858 scope.go:117] "RemoveContainer" containerID="b0bbc28315ba231d5b6cd90cb9d938f8e113cea6d913e963cf2d88fa67c7ec21" Jan 27 21:29:53 crc kubenswrapper[4858]: E0127 21:29:53.310583 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0bbc28315ba231d5b6cd90cb9d938f8e113cea6d913e963cf2d88fa67c7ec21\": container with ID starting with b0bbc28315ba231d5b6cd90cb9d938f8e113cea6d913e963cf2d88fa67c7ec21 not found: ID does not exist" containerID="b0bbc28315ba231d5b6cd90cb9d938f8e113cea6d913e963cf2d88fa67c7ec21" Jan 27 21:29:53 crc kubenswrapper[4858]: I0127 21:29:53.310604 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0bbc28315ba231d5b6cd90cb9d938f8e113cea6d913e963cf2d88fa67c7ec21"} err="failed to get container status \"b0bbc28315ba231d5b6cd90cb9d938f8e113cea6d913e963cf2d88fa67c7ec21\": rpc error: code = NotFound desc = could not find container \"b0bbc28315ba231d5b6cd90cb9d938f8e113cea6d913e963cf2d88fa67c7ec21\": container with ID starting with b0bbc28315ba231d5b6cd90cb9d938f8e113cea6d913e963cf2d88fa67c7ec21 not found: ID does not exist" Jan 27 21:29:53 crc kubenswrapper[4858]: I0127 21:29:53.517979 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1532963d-4da7-4cae-8bcd-41d76bb05683-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1532963d-4da7-4cae-8bcd-41d76bb05683" (UID: "1532963d-4da7-4cae-8bcd-41d76bb05683"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:29:53 crc kubenswrapper[4858]: I0127 21:29:53.564679 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mb9f9"] Jan 27 21:29:53 crc kubenswrapper[4858]: I0127 21:29:53.575217 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mb9f9"] Jan 27 21:29:53 crc kubenswrapper[4858]: I0127 21:29:53.594932 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1532963d-4da7-4cae-8bcd-41d76bb05683-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:29:54 crc kubenswrapper[4858]: I0127 21:29:54.088530 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1532963d-4da7-4cae-8bcd-41d76bb05683" path="/var/lib/kubelet/pods/1532963d-4da7-4cae-8bcd-41d76bb05683/volumes" Jan 27 21:29:59 crc kubenswrapper[4858]: I0127 21:29:59.328967 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:29:59 crc kubenswrapper[4858]: I0127 21:29:59.329447 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.155160 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7"] Jan 27 21:30:00 crc kubenswrapper[4858]: E0127 21:30:00.156072 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1532963d-4da7-4cae-8bcd-41d76bb05683" containerName="registry-server" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.156095 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1532963d-4da7-4cae-8bcd-41d76bb05683" containerName="registry-server" Jan 27 21:30:00 crc kubenswrapper[4858]: E0127 21:30:00.156136 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1532963d-4da7-4cae-8bcd-41d76bb05683" containerName="extract-content" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.156143 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1532963d-4da7-4cae-8bcd-41d76bb05683" containerName="extract-content" Jan 27 21:30:00 crc kubenswrapper[4858]: E0127 21:30:00.156160 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1532963d-4da7-4cae-8bcd-41d76bb05683" containerName="extract-utilities" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.156167 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1532963d-4da7-4cae-8bcd-41d76bb05683" containerName="extract-utilities" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.156376 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1532963d-4da7-4cae-8bcd-41d76bb05683" containerName="registry-server" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.159262 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.166047 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7"] Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.212921 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.212947 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.362974 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlcq8\" (UniqueName: \"kubernetes.io/projected/bff78a17-d94b-4b1c-b787-6dc4f415a209-kube-api-access-mlcq8\") pod \"collect-profiles-29492490-5smd7\" (UID: \"bff78a17-d94b-4b1c-b787-6dc4f415a209\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.363180 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bff78a17-d94b-4b1c-b787-6dc4f415a209-config-volume\") pod \"collect-profiles-29492490-5smd7\" (UID: \"bff78a17-d94b-4b1c-b787-6dc4f415a209\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.363517 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bff78a17-d94b-4b1c-b787-6dc4f415a209-secret-volume\") pod \"collect-profiles-29492490-5smd7\" (UID: \"bff78a17-d94b-4b1c-b787-6dc4f415a209\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.465813 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bff78a17-d94b-4b1c-b787-6dc4f415a209-config-volume\") pod \"collect-profiles-29492490-5smd7\" (UID: \"bff78a17-d94b-4b1c-b787-6dc4f415a209\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.466376 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bff78a17-d94b-4b1c-b787-6dc4f415a209-secret-volume\") pod \"collect-profiles-29492490-5smd7\" (UID: \"bff78a17-d94b-4b1c-b787-6dc4f415a209\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.466666 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlcq8\" (UniqueName: \"kubernetes.io/projected/bff78a17-d94b-4b1c-b787-6dc4f415a209-kube-api-access-mlcq8\") pod \"collect-profiles-29492490-5smd7\" (UID: \"bff78a17-d94b-4b1c-b787-6dc4f415a209\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.466853 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bff78a17-d94b-4b1c-b787-6dc4f415a209-config-volume\") pod 
\"collect-profiles-29492490-5smd7\" (UID: \"bff78a17-d94b-4b1c-b787-6dc4f415a209\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.474313 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bff78a17-d94b-4b1c-b787-6dc4f415a209-secret-volume\") pod \"collect-profiles-29492490-5smd7\" (UID: \"bff78a17-d94b-4b1c-b787-6dc4f415a209\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.494826 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlcq8\" (UniqueName: \"kubernetes.io/projected/bff78a17-d94b-4b1c-b787-6dc4f415a209-kube-api-access-mlcq8\") pod \"collect-profiles-29492490-5smd7\" (UID: \"bff78a17-d94b-4b1c-b787-6dc4f415a209\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" Jan 27 21:30:00 crc kubenswrapper[4858]: I0127 21:30:00.557296 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" Jan 27 21:30:01 crc kubenswrapper[4858]: I0127 21:30:01.016582 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7"] Jan 27 21:30:01 crc kubenswrapper[4858]: I0127 21:30:01.306966 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" event={"ID":"bff78a17-d94b-4b1c-b787-6dc4f415a209","Type":"ContainerStarted","Data":"bd8e6e7e03cde4bc280ff0c7c1085ce4dcb9ac038fcdeb332b17e0927ea80088"} Jan 27 21:30:01 crc kubenswrapper[4858]: I0127 21:30:01.307013 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" event={"ID":"bff78a17-d94b-4b1c-b787-6dc4f415a209","Type":"ContainerStarted","Data":"05e8b8899c6413b9687536cb370633c76c99c9d399931510c8aca45a0c08f6ea"} Jan 27 21:30:01 crc kubenswrapper[4858]: I0127 21:30:01.328279 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" podStartSLOduration=1.328257449 podStartE2EDuration="1.328257449s" podCreationTimestamp="2026-01-27 21:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:30:01.323165596 +0000 UTC m=+4946.030981302" watchObservedRunningTime="2026-01-27 21:30:01.328257449 +0000 UTC m=+4946.036073155" Jan 27 21:30:02 crc kubenswrapper[4858]: I0127 21:30:02.317219 4858 generic.go:334] "Generic (PLEG): container finished" podID="bff78a17-d94b-4b1c-b787-6dc4f415a209" containerID="bd8e6e7e03cde4bc280ff0c7c1085ce4dcb9ac038fcdeb332b17e0927ea80088" exitCode=0 Jan 27 21:30:02 crc kubenswrapper[4858]: I0127 21:30:02.317328 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" event={"ID":"bff78a17-d94b-4b1c-b787-6dc4f415a209","Type":"ContainerDied","Data":"bd8e6e7e03cde4bc280ff0c7c1085ce4dcb9ac038fcdeb332b17e0927ea80088"} Jan 27 21:30:03 crc kubenswrapper[4858]: I0127 21:30:03.683612 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" Jan 27 21:30:03 crc kubenswrapper[4858]: I0127 21:30:03.842615 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlcq8\" (UniqueName: \"kubernetes.io/projected/bff78a17-d94b-4b1c-b787-6dc4f415a209-kube-api-access-mlcq8\") pod \"bff78a17-d94b-4b1c-b787-6dc4f415a209\" (UID: \"bff78a17-d94b-4b1c-b787-6dc4f415a209\") " Jan 27 21:30:03 crc kubenswrapper[4858]: I0127 21:30:03.842844 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bff78a17-d94b-4b1c-b787-6dc4f415a209-config-volume\") pod \"bff78a17-d94b-4b1c-b787-6dc4f415a209\" (UID: \"bff78a17-d94b-4b1c-b787-6dc4f415a209\") " Jan 27 21:30:03 crc kubenswrapper[4858]: I0127 21:30:03.842873 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bff78a17-d94b-4b1c-b787-6dc4f415a209-secret-volume\") pod \"bff78a17-d94b-4b1c-b787-6dc4f415a209\" (UID: \"bff78a17-d94b-4b1c-b787-6dc4f415a209\") " Jan 27 21:30:03 crc kubenswrapper[4858]: I0127 21:30:03.843296 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bff78a17-d94b-4b1c-b787-6dc4f415a209-config-volume" (OuterVolumeSpecName: "config-volume") pod "bff78a17-d94b-4b1c-b787-6dc4f415a209" (UID: "bff78a17-d94b-4b1c-b787-6dc4f415a209"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:30:03 crc kubenswrapper[4858]: I0127 21:30:03.843491 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bff78a17-d94b-4b1c-b787-6dc4f415a209-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 21:30:03 crc kubenswrapper[4858]: I0127 21:30:03.848139 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff78a17-d94b-4b1c-b787-6dc4f415a209-kube-api-access-mlcq8" (OuterVolumeSpecName: "kube-api-access-mlcq8") pod "bff78a17-d94b-4b1c-b787-6dc4f415a209" (UID: "bff78a17-d94b-4b1c-b787-6dc4f415a209"). InnerVolumeSpecName "kube-api-access-mlcq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:30:03 crc kubenswrapper[4858]: I0127 21:30:03.848179 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff78a17-d94b-4b1c-b787-6dc4f415a209-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bff78a17-d94b-4b1c-b787-6dc4f415a209" (UID: "bff78a17-d94b-4b1c-b787-6dc4f415a209"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:30:03 crc kubenswrapper[4858]: I0127 21:30:03.945941 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bff78a17-d94b-4b1c-b787-6dc4f415a209-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 21:30:03 crc kubenswrapper[4858]: I0127 21:30:03.946384 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlcq8\" (UniqueName: \"kubernetes.io/projected/bff78a17-d94b-4b1c-b787-6dc4f415a209-kube-api-access-mlcq8\") on node \"crc\" DevicePath \"\"" Jan 27 21:30:04 crc kubenswrapper[4858]: I0127 21:30:04.339074 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" event={"ID":"bff78a17-d94b-4b1c-b787-6dc4f415a209","Type":"ContainerDied","Data":"05e8b8899c6413b9687536cb370633c76c99c9d399931510c8aca45a0c08f6ea"} Jan 27 21:30:04 crc kubenswrapper[4858]: I0127 21:30:04.339124 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05e8b8899c6413b9687536cb370633c76c99c9d399931510c8aca45a0c08f6ea" Jan 27 21:30:04 crc kubenswrapper[4858]: I0127 21:30:04.339128 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492490-5smd7" Jan 27 21:30:04 crc kubenswrapper[4858]: I0127 21:30:04.406640 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d"] Jan 27 21:30:04 crc kubenswrapper[4858]: I0127 21:30:04.415219 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492445-8m49d"] Jan 27 21:30:06 crc kubenswrapper[4858]: I0127 21:30:06.092115 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ab60935-49aa-431d-80f1-59101d72d598" path="/var/lib/kubelet/pods/5ab60935-49aa-431d-80f1-59101d72d598/volumes" Jan 27 21:30:29 crc kubenswrapper[4858]: I0127 21:30:29.329155 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:30:29 crc kubenswrapper[4858]: I0127 21:30:29.329968 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:30:59 crc kubenswrapper[4858]: I0127 21:30:59.328785 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:30:59 crc kubenswrapper[4858]: I0127 21:30:59.329437 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:30:59 crc kubenswrapper[4858]: 
I0127 21:30:59.329492 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 21:30:59 crc kubenswrapper[4858]: I0127 21:30:59.330350 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 21:30:59 crc kubenswrapper[4858]: I0127 21:30:59.330410 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" gracePeriod=600 Jan 27 21:30:59 crc kubenswrapper[4858]: E0127 21:30:59.453431 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:30:59 crc kubenswrapper[4858]: I0127 21:30:59.906434 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" exitCode=0 Jan 27 21:30:59 crc kubenswrapper[4858]: I0127 21:30:59.906598 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589"} Jan 27 21:30:59 crc kubenswrapper[4858]: I0127 21:30:59.906800 4858 scope.go:117] "RemoveContainer" containerID="79cbeeb5ef365701339444fc3e244f7bf6ac3ba14d849c3defe1779ab3765e81" Jan 27 21:30:59 crc kubenswrapper[4858]: I0127 21:30:59.907461 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:30:59 crc kubenswrapper[4858]: E0127 21:30:59.907840 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:31:01 crc kubenswrapper[4858]: I0127 21:31:01.564322 4858 scope.go:117] "RemoveContainer" containerID="68e09df089df7ffec73db64bd6882efe0ca038b02e11d81449c42e7808e42ed9" Jan 27 21:31:11 crc kubenswrapper[4858]: I0127 21:31:11.071015 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:31:11 crc kubenswrapper[4858]: E0127 21:31:11.072063 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:31:23 crc kubenswrapper[4858]: I0127 21:31:23.070876 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:31:23 crc kubenswrapper[4858]: E0127 21:31:23.071752 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:31:37 crc kubenswrapper[4858]: I0127 21:31:37.071455 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:31:37 crc kubenswrapper[4858]: E0127 21:31:37.072150 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:31:48 crc kubenswrapper[4858]: I0127 21:31:48.071044 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:31:48 crc kubenswrapper[4858]: E0127 21:31:48.071977 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:32:00 crc kubenswrapper[4858]: I0127 21:32:00.071632 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:32:00 crc kubenswrapper[4858]: E0127 21:32:00.072434 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:32:11 crc kubenswrapper[4858]: I0127 21:32:11.070982 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:32:11 crc kubenswrapper[4858]: E0127 21:32:11.071945 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:32:25 crc kubenswrapper[4858]: I0127 21:32:25.071606 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:32:25 crc kubenswrapper[4858]: E0127 21:32:25.072484 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:32:39 crc kubenswrapper[4858]: I0127 21:32:39.071280 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:32:39 crc kubenswrapper[4858]: E0127 21:32:39.074240 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:32:54 crc kubenswrapper[4858]: I0127 21:32:54.071673 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:32:54 crc kubenswrapper[4858]: E0127 21:32:54.072905 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:33:08 crc kubenswrapper[4858]: I0127 21:33:08.071807 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:33:08 crc kubenswrapper[4858]: E0127 21:33:08.072744 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:33:21 crc kubenswrapper[4858]: I0127 21:33:21.071480 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:33:21 crc kubenswrapper[4858]: E0127 21:33:21.072667 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:33:33 crc kubenswrapper[4858]: I0127 21:33:33.072810 4858 
scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:33:33 crc kubenswrapper[4858]: E0127 21:33:33.074884 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:33:44 crc kubenswrapper[4858]: I0127 21:33:44.073543 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:33:44 crc kubenswrapper[4858]: E0127 21:33:44.074494 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:33:59 crc kubenswrapper[4858]: I0127 21:33:59.070724 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:33:59 crc kubenswrapper[4858]: E0127 21:33:59.071512 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:34:11 crc kubenswrapper[4858]: I0127 21:34:11.072350 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:34:11 crc kubenswrapper[4858]: E0127 21:34:11.073055 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:34:16 crc kubenswrapper[4858]: I0127 21:34:16.521717 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-52jmb"] Jan 27 21:34:16 crc kubenswrapper[4858]: E0127 21:34:16.530686 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff78a17-d94b-4b1c-b787-6dc4f415a209" containerName="collect-profiles" Jan 27 21:34:16 crc kubenswrapper[4858]: I0127 21:34:16.530769 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff78a17-d94b-4b1c-b787-6dc4f415a209" containerName="collect-profiles" Jan 27 21:34:16 crc kubenswrapper[4858]: I0127 21:34:16.531948 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff78a17-d94b-4b1c-b787-6dc4f415a209" containerName="collect-profiles" Jan 27 21:34:16 crc kubenswrapper[4858]: I0127 21:34:16.540704 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-52jmb" Jan 27 21:34:16 crc kubenswrapper[4858]: I0127 21:34:16.578909 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52jmb"] Jan 27 21:34:16 crc kubenswrapper[4858]: I0127 21:34:16.734289 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f8014c-1cf1-4206-80b5-143242ffb7c8-catalog-content\") pod \"community-operators-52jmb\" (UID: \"11f8014c-1cf1-4206-80b5-143242ffb7c8\") " pod="openshift-marketplace/community-operators-52jmb" Jan 27 21:34:16 crc kubenswrapper[4858]: I0127 21:34:16.734663 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5gl7\" (UniqueName: \"kubernetes.io/projected/11f8014c-1cf1-4206-80b5-143242ffb7c8-kube-api-access-s5gl7\") pod \"community-operators-52jmb\" (UID: \"11f8014c-1cf1-4206-80b5-143242ffb7c8\") " pod="openshift-marketplace/community-operators-52jmb" Jan 27 21:34:16 crc kubenswrapper[4858]: I0127 21:34:16.734787 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f8014c-1cf1-4206-80b5-143242ffb7c8-utilities\") pod \"community-operators-52jmb\" (UID: \"11f8014c-1cf1-4206-80b5-143242ffb7c8\") " pod="openshift-marketplace/community-operators-52jmb" Jan 27 21:34:16 crc kubenswrapper[4858]: I0127 21:34:16.836376 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5gl7\" (UniqueName: \"kubernetes.io/projected/11f8014c-1cf1-4206-80b5-143242ffb7c8-kube-api-access-s5gl7\") pod \"community-operators-52jmb\" (UID: \"11f8014c-1cf1-4206-80b5-143242ffb7c8\") " pod="openshift-marketplace/community-operators-52jmb" Jan 27 21:34:16 crc kubenswrapper[4858]: I0127 21:34:16.836527 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f8014c-1cf1-4206-80b5-143242ffb7c8-utilities\") pod \"community-operators-52jmb\" (UID: \"11f8014c-1cf1-4206-80b5-143242ffb7c8\") " pod="openshift-marketplace/community-operators-52jmb" Jan 27 21:34:16 crc kubenswrapper[4858]: I0127 21:34:16.836634 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f8014c-1cf1-4206-80b5-143242ffb7c8-catalog-content\") pod \"community-operators-52jmb\" (UID: \"11f8014c-1cf1-4206-80b5-143242ffb7c8\") " pod="openshift-marketplace/community-operators-52jmb" Jan 27 21:34:16 crc kubenswrapper[4858]: I0127 21:34:16.837052 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f8014c-1cf1-4206-80b5-143242ffb7c8-utilities\") pod \"community-operators-52jmb\" (UID: \"11f8014c-1cf1-4206-80b5-143242ffb7c8\") " pod="openshift-marketplace/community-operators-52jmb" Jan 27 21:34:16 crc kubenswrapper[4858]: I0127 21:34:16.837139 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f8014c-1cf1-4206-80b5-143242ffb7c8-catalog-content\") pod \"community-operators-52jmb\" (UID: \"11f8014c-1cf1-4206-80b5-143242ffb7c8\") " pod="openshift-marketplace/community-operators-52jmb" Jan 27 21:34:16 crc kubenswrapper[4858]: I0127 21:34:16.882401 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s5gl7\" (UniqueName: \"kubernetes.io/projected/11f8014c-1cf1-4206-80b5-143242ffb7c8-kube-api-access-s5gl7\") pod \"community-operators-52jmb\" (UID: \"11f8014c-1cf1-4206-80b5-143242ffb7c8\") " pod="openshift-marketplace/community-operators-52jmb" Jan 27 21:34:17 crc kubenswrapper[4858]: I0127 21:34:17.180187 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52jmb" Jan 27 21:34:17 crc kubenswrapper[4858]: I0127 21:34:17.634159 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52jmb"] Jan 27 21:34:17 crc kubenswrapper[4858]: I0127 21:34:17.916127 4858 generic.go:334] "Generic (PLEG): container finished" podID="11f8014c-1cf1-4206-80b5-143242ffb7c8" containerID="61e613290510f22530a200d585d9e410d837ba516694beb821d5f3f1e441166b" exitCode=0 Jan 27 21:34:17 crc kubenswrapper[4858]: I0127 21:34:17.916197 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52jmb" event={"ID":"11f8014c-1cf1-4206-80b5-143242ffb7c8","Type":"ContainerDied","Data":"61e613290510f22530a200d585d9e410d837ba516694beb821d5f3f1e441166b"} Jan 27 21:34:17 crc kubenswrapper[4858]: I0127 21:34:17.917026 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52jmb" event={"ID":"11f8014c-1cf1-4206-80b5-143242ffb7c8","Type":"ContainerStarted","Data":"78b1c13290e6415e4e90365748e1dd62bbb7ecfee41e99131f726924ea3d9912"} Jan 27 21:34:18 crc kubenswrapper[4858]: I0127 21:34:18.929309 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52jmb" event={"ID":"11f8014c-1cf1-4206-80b5-143242ffb7c8","Type":"ContainerStarted","Data":"6341790d6d57cbdf95b35fa99bb30fe004b9437a761b5dac273d41efc8023119"} Jan 27 21:34:20 crc kubenswrapper[4858]: I0127 21:34:20.948271 4858 generic.go:334] "Generic (PLEG): container finished" podID="11f8014c-1cf1-4206-80b5-143242ffb7c8" containerID="6341790d6d57cbdf95b35fa99bb30fe004b9437a761b5dac273d41efc8023119" exitCode=0 Jan 27 21:34:20 crc kubenswrapper[4858]: I0127 21:34:20.948371 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52jmb" event={"ID":"11f8014c-1cf1-4206-80b5-143242ffb7c8","Type":"ContainerDied","Data":"6341790d6d57cbdf95b35fa99bb30fe004b9437a761b5dac273d41efc8023119"} Jan 27 21:34:21 crc kubenswrapper[4858]: I0127 21:34:21.959754 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52jmb" event={"ID":"11f8014c-1cf1-4206-80b5-143242ffb7c8","Type":"ContainerStarted","Data":"1ea626c2e99812528b05cba467ef838b1766f50067fa8c619d17c9cf7f0bac87"} Jan 27 21:34:21 crc kubenswrapper[4858]: I0127 21:34:21.985135 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-52jmb" podStartSLOduration=2.4804599720000002 podStartE2EDuration="5.985116208s" podCreationTimestamp="2026-01-27 21:34:16 +0000 UTC" firstStartedPulling="2026-01-27 21:34:17.918657841 +0000 UTC m=+5202.626473547" lastFinishedPulling="2026-01-27 21:34:21.423314087 +0000 UTC m=+5206.131129783" observedRunningTime="2026-01-27 21:34:21.976874857 +0000 UTC m=+5206.684690573" watchObservedRunningTime="2026-01-27 21:34:21.985116208 +0000 UTC m=+5206.692931914" Jan 27 21:34:22 crc kubenswrapper[4858]: I0127 21:34:22.071377 4858 scope.go:117] "RemoveContainer" 
containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:34:22 crc kubenswrapper[4858]: E0127 21:34:22.071698 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:34:26 crc kubenswrapper[4858]: I0127 21:34:26.956687 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wltfl"] Jan 27 21:34:26 crc kubenswrapper[4858]: I0127 21:34:26.968101 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wltfl" Jan 27 21:34:26 crc kubenswrapper[4858]: I0127 21:34:26.977335 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wltfl"] Jan 27 21:34:27 crc kubenswrapper[4858]: I0127 21:34:27.041454 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93ae8318-51d2-41d5-9faf-4098be368144-utilities\") pod \"redhat-marketplace-wltfl\" (UID: \"93ae8318-51d2-41d5-9faf-4098be368144\") " pod="openshift-marketplace/redhat-marketplace-wltfl" Jan 27 21:34:27 crc kubenswrapper[4858]: I0127 21:34:27.042029 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93ae8318-51d2-41d5-9faf-4098be368144-catalog-content\") pod \"redhat-marketplace-wltfl\" (UID: \"93ae8318-51d2-41d5-9faf-4098be368144\") " pod="openshift-marketplace/redhat-marketplace-wltfl" Jan 27 21:34:27 crc kubenswrapper[4858]: I0127 21:34:27.042082 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnjjq\" (UniqueName: \"kubernetes.io/projected/93ae8318-51d2-41d5-9faf-4098be368144-kube-api-access-cnjjq\") pod \"redhat-marketplace-wltfl\" (UID: \"93ae8318-51d2-41d5-9faf-4098be368144\") " pod="openshift-marketplace/redhat-marketplace-wltfl" Jan 27 21:34:27 crc kubenswrapper[4858]: I0127 21:34:27.143378 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93ae8318-51d2-41d5-9faf-4098be368144-catalog-content\") pod \"redhat-marketplace-wltfl\" (UID: \"93ae8318-51d2-41d5-9faf-4098be368144\") " pod="openshift-marketplace/redhat-marketplace-wltfl" Jan 27 21:34:27 crc kubenswrapper[4858]: I0127 21:34:27.143478 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnjjq\" (UniqueName: \"kubernetes.io/projected/93ae8318-51d2-41d5-9faf-4098be368144-kube-api-access-cnjjq\") pod \"redhat-marketplace-wltfl\" (UID: \"93ae8318-51d2-41d5-9faf-4098be368144\") " pod="openshift-marketplace/redhat-marketplace-wltfl" Jan 27 21:34:27 crc kubenswrapper[4858]: I0127 21:34:27.143529 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93ae8318-51d2-41d5-9faf-4098be368144-utilities\") pod \"redhat-marketplace-wltfl\" (UID: \"93ae8318-51d2-41d5-9faf-4098be368144\") " pod="openshift-marketplace/redhat-marketplace-wltfl" Jan 27 21:34:27 crc 
kubenswrapper[4858]: I0127 21:34:27.144283 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93ae8318-51d2-41d5-9faf-4098be368144-utilities\") pod \"redhat-marketplace-wltfl\" (UID: \"93ae8318-51d2-41d5-9faf-4098be368144\") " pod="openshift-marketplace/redhat-marketplace-wltfl" Jan 27 21:34:27 crc kubenswrapper[4858]: I0127 21:34:27.144421 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93ae8318-51d2-41d5-9faf-4098be368144-catalog-content\") pod \"redhat-marketplace-wltfl\" (UID: \"93ae8318-51d2-41d5-9faf-4098be368144\") " pod="openshift-marketplace/redhat-marketplace-wltfl" Jan 27 21:34:27 crc kubenswrapper[4858]: I0127 21:34:27.167748 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnjjq\" (UniqueName: \"kubernetes.io/projected/93ae8318-51d2-41d5-9faf-4098be368144-kube-api-access-cnjjq\") pod \"redhat-marketplace-wltfl\" (UID: \"93ae8318-51d2-41d5-9faf-4098be368144\") " pod="openshift-marketplace/redhat-marketplace-wltfl" Jan 27 21:34:27 crc kubenswrapper[4858]: I0127 21:34:27.180973 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-52jmb" Jan 27 21:34:27 crc kubenswrapper[4858]: I0127 21:34:27.181046 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-52jmb" Jan 27 21:34:27 crc kubenswrapper[4858]: I0127 21:34:27.244304 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-52jmb" Jan 27 21:34:27 crc kubenswrapper[4858]: I0127 21:34:27.299336 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wltfl" Jan 27 21:34:27 crc kubenswrapper[4858]: W0127 21:34:27.795498 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93ae8318_51d2_41d5_9faf_4098be368144.slice/crio-405a76d425410ea36c335d7b55873e756cddfe196cee045e4492d0b4e5885d35 WatchSource:0}: Error finding container 405a76d425410ea36c335d7b55873e756cddfe196cee045e4492d0b4e5885d35: Status 404 returned error can't find the container with id 405a76d425410ea36c335d7b55873e756cddfe196cee045e4492d0b4e5885d35 Jan 27 21:34:27 crc kubenswrapper[4858]: I0127 21:34:27.805861 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wltfl"] Jan 27 21:34:28 crc kubenswrapper[4858]: I0127 21:34:28.015533 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wltfl" event={"ID":"93ae8318-51d2-41d5-9faf-4098be368144","Type":"ContainerStarted","Data":"405a76d425410ea36c335d7b55873e756cddfe196cee045e4492d0b4e5885d35"} Jan 27 21:34:28 crc kubenswrapper[4858]: I0127 21:34:28.086636 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-52jmb" Jan 27 21:34:29 crc kubenswrapper[4858]: I0127 21:34:29.062041 4858 generic.go:334] "Generic (PLEG): container finished" podID="93ae8318-51d2-41d5-9faf-4098be368144" containerID="4c0171b01664abd014da7b8c6a17bd4e56646d0ef6325290b78748a50240a0da" exitCode=0 Jan 27 21:34:29 crc kubenswrapper[4858]: I0127 21:34:29.063305 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wltfl" event={"ID":"93ae8318-51d2-41d5-9faf-4098be368144","Type":"ContainerDied","Data":"4c0171b01664abd014da7b8c6a17bd4e56646d0ef6325290b78748a50240a0da"} Jan 27 21:34:29 crc kubenswrapper[4858]: I0127 21:34:29.537739 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52jmb"] Jan 27 21:34:31 crc kubenswrapper[4858]: I0127 21:34:31.079367 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-52jmb" podUID="11f8014c-1cf1-4206-80b5-143242ffb7c8" containerName="registry-server" containerID="cri-o://1ea626c2e99812528b05cba467ef838b1766f50067fa8c619d17c9cf7f0bac87" gracePeriod=2 Jan 27 21:34:31 crc kubenswrapper[4858]: I0127 21:34:31.748366 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-52jmb" Jan 27 21:34:31 crc kubenswrapper[4858]: I0127 21:34:31.883900 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f8014c-1cf1-4206-80b5-143242ffb7c8-utilities\") pod \"11f8014c-1cf1-4206-80b5-143242ffb7c8\" (UID: \"11f8014c-1cf1-4206-80b5-143242ffb7c8\") " Jan 27 21:34:31 crc kubenswrapper[4858]: I0127 21:34:31.883964 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f8014c-1cf1-4206-80b5-143242ffb7c8-catalog-content\") pod \"11f8014c-1cf1-4206-80b5-143242ffb7c8\" (UID: \"11f8014c-1cf1-4206-80b5-143242ffb7c8\") " Jan 27 21:34:31 crc kubenswrapper[4858]: I0127 21:34:31.884018 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5gl7\" (UniqueName: \"kubernetes.io/projected/11f8014c-1cf1-4206-80b5-143242ffb7c8-kube-api-access-s5gl7\") pod \"11f8014c-1cf1-4206-80b5-143242ffb7c8\" (UID: \"11f8014c-1cf1-4206-80b5-143242ffb7c8\") " Jan 27 21:34:31 crc kubenswrapper[4858]: I0127 21:34:31.885635 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f8014c-1cf1-4206-80b5-143242ffb7c8-utilities" (OuterVolumeSpecName: "utilities") pod "11f8014c-1cf1-4206-80b5-143242ffb7c8" (UID: "11f8014c-1cf1-4206-80b5-143242ffb7c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:34:31 crc kubenswrapper[4858]: I0127 21:34:31.898588 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f8014c-1cf1-4206-80b5-143242ffb7c8-kube-api-access-s5gl7" (OuterVolumeSpecName: "kube-api-access-s5gl7") pod "11f8014c-1cf1-4206-80b5-143242ffb7c8" (UID: "11f8014c-1cf1-4206-80b5-143242ffb7c8"). InnerVolumeSpecName "kube-api-access-s5gl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:34:31 crc kubenswrapper[4858]: I0127 21:34:31.987266 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f8014c-1cf1-4206-80b5-143242ffb7c8-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:34:31 crc kubenswrapper[4858]: I0127 21:34:31.987518 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5gl7\" (UniqueName: \"kubernetes.io/projected/11f8014c-1cf1-4206-80b5-143242ffb7c8-kube-api-access-s5gl7\") on node \"crc\" DevicePath \"\"" Jan 27 21:34:31 crc kubenswrapper[4858]: I0127 21:34:31.997806 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f8014c-1cf1-4206-80b5-143242ffb7c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11f8014c-1cf1-4206-80b5-143242ffb7c8" (UID: "11f8014c-1cf1-4206-80b5-143242ffb7c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:34:32 crc kubenswrapper[4858]: I0127 21:34:32.089505 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f8014c-1cf1-4206-80b5-143242ffb7c8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:34:32 crc kubenswrapper[4858]: I0127 21:34:32.098663 4858 generic.go:334] "Generic (PLEG): container finished" podID="11f8014c-1cf1-4206-80b5-143242ffb7c8" containerID="1ea626c2e99812528b05cba467ef838b1766f50067fa8c619d17c9cf7f0bac87" exitCode=0 Jan 27 21:34:32 crc kubenswrapper[4858]: I0127 21:34:32.098852 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52jmb" Jan 27 21:34:32 crc kubenswrapper[4858]: I0127 21:34:32.104770 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52jmb" event={"ID":"11f8014c-1cf1-4206-80b5-143242ffb7c8","Type":"ContainerDied","Data":"1ea626c2e99812528b05cba467ef838b1766f50067fa8c619d17c9cf7f0bac87"} Jan 27 21:34:32 crc kubenswrapper[4858]: I0127 21:34:32.104835 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52jmb" event={"ID":"11f8014c-1cf1-4206-80b5-143242ffb7c8","Type":"ContainerDied","Data":"78b1c13290e6415e4e90365748e1dd62bbb7ecfee41e99131f726924ea3d9912"} Jan 27 21:34:32 crc kubenswrapper[4858]: I0127 21:34:32.104865 4858 scope.go:117] "RemoveContainer" containerID="1ea626c2e99812528b05cba467ef838b1766f50067fa8c619d17c9cf7f0bac87" Jan 27 21:34:32 crc kubenswrapper[4858]: I0127 21:34:32.107042 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wltfl" event={"ID":"93ae8318-51d2-41d5-9faf-4098be368144","Type":"ContainerStarted","Data":"7d0ef3045a670dabdff3897e881ab828e5b89886e6df24d850ae90b8331addc7"} Jan 27 21:34:32 crc kubenswrapper[4858]: I0127 21:34:32.141295 4858 scope.go:117] "RemoveContainer" containerID="6341790d6d57cbdf95b35fa99bb30fe004b9437a761b5dac273d41efc8023119" Jan 27 21:34:32 crc kubenswrapper[4858]: I0127 21:34:32.169732 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52jmb"] Jan 27 21:34:32 crc kubenswrapper[4858]: I0127 21:34:32.169893 4858 scope.go:117] "RemoveContainer" containerID="61e613290510f22530a200d585d9e410d837ba516694beb821d5f3f1e441166b" Jan 27 21:34:32 crc kubenswrapper[4858]: I0127 21:34:32.181782 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-52jmb"] Jan 27 21:34:32 crc kubenswrapper[4858]: I0127 21:34:32.220208 4858 scope.go:117] "RemoveContainer" containerID="1ea626c2e99812528b05cba467ef838b1766f50067fa8c619d17c9cf7f0bac87" Jan 27 21:34:32 crc kubenswrapper[4858]: E0127 21:34:32.220773 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ea626c2e99812528b05cba467ef838b1766f50067fa8c619d17c9cf7f0bac87\": container with ID starting with 1ea626c2e99812528b05cba467ef838b1766f50067fa8c619d17c9cf7f0bac87 not found: ID does not exist" containerID="1ea626c2e99812528b05cba467ef838b1766f50067fa8c619d17c9cf7f0bac87" Jan 27 21:34:32 crc kubenswrapper[4858]: I0127 21:34:32.220899 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea626c2e99812528b05cba467ef838b1766f50067fa8c619d17c9cf7f0bac87"} err="failed to get container status 
\"1ea626c2e99812528b05cba467ef838b1766f50067fa8c619d17c9cf7f0bac87\": rpc error: code = NotFound desc = could not find container \"1ea626c2e99812528b05cba467ef838b1766f50067fa8c619d17c9cf7f0bac87\": container with ID starting with 1ea626c2e99812528b05cba467ef838b1766f50067fa8c619d17c9cf7f0bac87 not found: ID does not exist" Jan 27 21:34:32 crc kubenswrapper[4858]: I0127 21:34:32.220987 4858 scope.go:117] "RemoveContainer" containerID="6341790d6d57cbdf95b35fa99bb30fe004b9437a761b5dac273d41efc8023119" Jan 27 21:34:32 crc kubenswrapper[4858]: E0127 21:34:32.221274 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6341790d6d57cbdf95b35fa99bb30fe004b9437a761b5dac273d41efc8023119\": container with ID starting with 6341790d6d57cbdf95b35fa99bb30fe004b9437a761b5dac273d41efc8023119 not found: ID does not exist" containerID="6341790d6d57cbdf95b35fa99bb30fe004b9437a761b5dac273d41efc8023119" Jan 27 21:34:32 crc kubenswrapper[4858]: I0127 21:34:32.221372 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6341790d6d57cbdf95b35fa99bb30fe004b9437a761b5dac273d41efc8023119"} err="failed to get container status \"6341790d6d57cbdf95b35fa99bb30fe004b9437a761b5dac273d41efc8023119\": rpc error: code = NotFound desc = could not find container \"6341790d6d57cbdf95b35fa99bb30fe004b9437a761b5dac273d41efc8023119\": container with ID starting with 6341790d6d57cbdf95b35fa99bb30fe004b9437a761b5dac273d41efc8023119 not found: ID does not exist" Jan 27 21:34:32 crc kubenswrapper[4858]: I0127 21:34:32.221455 4858 scope.go:117] "RemoveContainer" containerID="61e613290510f22530a200d585d9e410d837ba516694beb821d5f3f1e441166b" Jan 27 21:34:32 crc kubenswrapper[4858]: E0127 21:34:32.221954 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61e613290510f22530a200d585d9e410d837ba516694beb821d5f3f1e441166b\": container with ID starting with 61e613290510f22530a200d585d9e410d837ba516694beb821d5f3f1e441166b not found: ID does not exist" containerID="61e613290510f22530a200d585d9e410d837ba516694beb821d5f3f1e441166b" Jan 27 21:34:32 crc kubenswrapper[4858]: I0127 21:34:32.221986 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61e613290510f22530a200d585d9e410d837ba516694beb821d5f3f1e441166b"} err="failed to get container status \"61e613290510f22530a200d585d9e410d837ba516694beb821d5f3f1e441166b\": rpc error: code = NotFound desc = could not find container \"61e613290510f22530a200d585d9e410d837ba516694beb821d5f3f1e441166b\": container with ID starting with 61e613290510f22530a200d585d9e410d837ba516694beb821d5f3f1e441166b not found: ID does not exist" Jan 27 21:34:32 crc kubenswrapper[4858]: E0127 21:34:32.290383 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11f8014c_1cf1_4206_80b5_143242ffb7c8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11f8014c_1cf1_4206_80b5_143242ffb7c8.slice/crio-78b1c13290e6415e4e90365748e1dd62bbb7ecfee41e99131f726924ea3d9912\": RecentStats: unable to find data in memory cache]" Jan 27 21:34:33 crc kubenswrapper[4858]: I0127 21:34:33.123305 4858 generic.go:334] "Generic (PLEG): container finished" podID="93ae8318-51d2-41d5-9faf-4098be368144" 
containerID="7d0ef3045a670dabdff3897e881ab828e5b89886e6df24d850ae90b8331addc7" exitCode=0 Jan 27 21:34:33 crc kubenswrapper[4858]: I0127 21:34:33.123417 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wltfl" event={"ID":"93ae8318-51d2-41d5-9faf-4098be368144","Type":"ContainerDied","Data":"7d0ef3045a670dabdff3897e881ab828e5b89886e6df24d850ae90b8331addc7"} Jan 27 21:34:34 crc kubenswrapper[4858]: I0127 21:34:34.086820 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11f8014c-1cf1-4206-80b5-143242ffb7c8" path="/var/lib/kubelet/pods/11f8014c-1cf1-4206-80b5-143242ffb7c8/volumes" Jan 27 21:34:34 crc kubenswrapper[4858]: I0127 21:34:34.167114 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wltfl" event={"ID":"93ae8318-51d2-41d5-9faf-4098be368144","Type":"ContainerStarted","Data":"9ff2efd3c4b000cd9d4d59e0c9efce65001107e35693f1955c789f0015dd7683"} Jan 27 21:34:34 crc kubenswrapper[4858]: I0127 21:34:34.196654 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wltfl" podStartSLOduration=3.724566844 podStartE2EDuration="8.196628911s" podCreationTimestamp="2026-01-27 21:34:26 +0000 UTC" firstStartedPulling="2026-01-27 21:34:29.067684411 +0000 UTC m=+5213.775500117" lastFinishedPulling="2026-01-27 21:34:33.539746478 +0000 UTC m=+5218.247562184" observedRunningTime="2026-01-27 21:34:34.192201446 +0000 UTC m=+5218.900017182" watchObservedRunningTime="2026-01-27 21:34:34.196628911 +0000 UTC m=+5218.904444627" Jan 27 21:34:37 crc kubenswrapper[4858]: I0127 21:34:37.071776 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:34:37 crc kubenswrapper[4858]: E0127 21:34:37.072565 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:34:37 crc kubenswrapper[4858]: I0127 21:34:37.299856 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wltfl" Jan 27 21:34:37 crc kubenswrapper[4858]: I0127 21:34:37.299919 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wltfl" Jan 27 21:34:37 crc kubenswrapper[4858]: I0127 21:34:37.348142 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wltfl" Jan 27 21:34:38 crc kubenswrapper[4858]: I0127 21:34:38.252165 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wltfl" Jan 27 21:34:38 crc kubenswrapper[4858]: I0127 21:34:38.723858 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wltfl"] Jan 27 21:34:40 crc kubenswrapper[4858]: I0127 21:34:40.219115 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wltfl" podUID="93ae8318-51d2-41d5-9faf-4098be368144" containerName="registry-server" 
containerID="cri-o://9ff2efd3c4b000cd9d4d59e0c9efce65001107e35693f1955c789f0015dd7683" gracePeriod=2 Jan 27 21:34:40 crc kubenswrapper[4858]: I0127 21:34:40.725064 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wltfl" Jan 27 21:34:40 crc kubenswrapper[4858]: I0127 21:34:40.801478 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnjjq\" (UniqueName: \"kubernetes.io/projected/93ae8318-51d2-41d5-9faf-4098be368144-kube-api-access-cnjjq\") pod \"93ae8318-51d2-41d5-9faf-4098be368144\" (UID: \"93ae8318-51d2-41d5-9faf-4098be368144\") " Jan 27 21:34:40 crc kubenswrapper[4858]: I0127 21:34:40.801815 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93ae8318-51d2-41d5-9faf-4098be368144-catalog-content\") pod \"93ae8318-51d2-41d5-9faf-4098be368144\" (UID: \"93ae8318-51d2-41d5-9faf-4098be368144\") " Jan 27 21:34:40 crc kubenswrapper[4858]: I0127 21:34:40.801852 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93ae8318-51d2-41d5-9faf-4098be368144-utilities\") pod \"93ae8318-51d2-41d5-9faf-4098be368144\" (UID: \"93ae8318-51d2-41d5-9faf-4098be368144\") " Jan 27 21:34:40 crc kubenswrapper[4858]: I0127 21:34:40.804449 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93ae8318-51d2-41d5-9faf-4098be368144-utilities" (OuterVolumeSpecName: "utilities") pod "93ae8318-51d2-41d5-9faf-4098be368144" (UID: "93ae8318-51d2-41d5-9faf-4098be368144"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:34:40 crc kubenswrapper[4858]: I0127 21:34:40.808842 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93ae8318-51d2-41d5-9faf-4098be368144-kube-api-access-cnjjq" (OuterVolumeSpecName: "kube-api-access-cnjjq") pod "93ae8318-51d2-41d5-9faf-4098be368144" (UID: "93ae8318-51d2-41d5-9faf-4098be368144"). InnerVolumeSpecName "kube-api-access-cnjjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:34:40 crc kubenswrapper[4858]: I0127 21:34:40.836377 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93ae8318-51d2-41d5-9faf-4098be368144-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93ae8318-51d2-41d5-9faf-4098be368144" (UID: "93ae8318-51d2-41d5-9faf-4098be368144"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:34:40 crc kubenswrapper[4858]: I0127 21:34:40.904316 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnjjq\" (UniqueName: \"kubernetes.io/projected/93ae8318-51d2-41d5-9faf-4098be368144-kube-api-access-cnjjq\") on node \"crc\" DevicePath \"\"" Jan 27 21:34:40 crc kubenswrapper[4858]: I0127 21:34:40.904344 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93ae8318-51d2-41d5-9faf-4098be368144-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:34:40 crc kubenswrapper[4858]: I0127 21:34:40.904353 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93ae8318-51d2-41d5-9faf-4098be368144-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:34:41 crc kubenswrapper[4858]: I0127 21:34:41.231968 4858 generic.go:334] "Generic (PLEG): container finished" podID="93ae8318-51d2-41d5-9faf-4098be368144" containerID="9ff2efd3c4b000cd9d4d59e0c9efce65001107e35693f1955c789f0015dd7683" exitCode=0 Jan 27 21:34:41 crc kubenswrapper[4858]: I0127 21:34:41.232055 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wltfl" event={"ID":"93ae8318-51d2-41d5-9faf-4098be368144","Type":"ContainerDied","Data":"9ff2efd3c4b000cd9d4d59e0c9efce65001107e35693f1955c789f0015dd7683"} Jan 27 21:34:41 crc kubenswrapper[4858]: I0127 21:34:41.232071 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wltfl" Jan 27 21:34:41 crc kubenswrapper[4858]: I0127 21:34:41.232129 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wltfl" event={"ID":"93ae8318-51d2-41d5-9faf-4098be368144","Type":"ContainerDied","Data":"405a76d425410ea36c335d7b55873e756cddfe196cee045e4492d0b4e5885d35"} Jan 27 21:34:41 crc kubenswrapper[4858]: I0127 21:34:41.232158 4858 scope.go:117] "RemoveContainer" containerID="9ff2efd3c4b000cd9d4d59e0c9efce65001107e35693f1955c789f0015dd7683" Jan 27 21:34:41 crc kubenswrapper[4858]: I0127 21:34:41.256848 4858 scope.go:117] "RemoveContainer" containerID="7d0ef3045a670dabdff3897e881ab828e5b89886e6df24d850ae90b8331addc7" Jan 27 21:34:41 crc kubenswrapper[4858]: I0127 21:34:41.275218 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wltfl"] Jan 27 21:34:41 crc kubenswrapper[4858]: I0127 21:34:41.287676 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wltfl"] Jan 27 21:34:41 crc kubenswrapper[4858]: I0127 21:34:41.306073 4858 scope.go:117] "RemoveContainer" containerID="4c0171b01664abd014da7b8c6a17bd4e56646d0ef6325290b78748a50240a0da" Jan 27 21:34:41 crc kubenswrapper[4858]: I0127 21:34:41.335395 4858 scope.go:117] "RemoveContainer" containerID="9ff2efd3c4b000cd9d4d59e0c9efce65001107e35693f1955c789f0015dd7683" Jan 27 21:34:41 crc kubenswrapper[4858]: E0127 21:34:41.336307 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ff2efd3c4b000cd9d4d59e0c9efce65001107e35693f1955c789f0015dd7683\": container with ID starting with 9ff2efd3c4b000cd9d4d59e0c9efce65001107e35693f1955c789f0015dd7683 not found: ID does not exist" containerID="9ff2efd3c4b000cd9d4d59e0c9efce65001107e35693f1955c789f0015dd7683" Jan 27 21:34:41 crc kubenswrapper[4858]: I0127 21:34:41.336337 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ff2efd3c4b000cd9d4d59e0c9efce65001107e35693f1955c789f0015dd7683"} err="failed to get container status \"9ff2efd3c4b000cd9d4d59e0c9efce65001107e35693f1955c789f0015dd7683\": rpc error: code = NotFound desc = could not find container \"9ff2efd3c4b000cd9d4d59e0c9efce65001107e35693f1955c789f0015dd7683\": container with ID starting with 9ff2efd3c4b000cd9d4d59e0c9efce65001107e35693f1955c789f0015dd7683 not found: ID does not exist" Jan 27 21:34:41 crc kubenswrapper[4858]: I0127 21:34:41.336377 4858 scope.go:117] "RemoveContainer" containerID="7d0ef3045a670dabdff3897e881ab828e5b89886e6df24d850ae90b8331addc7" Jan 27 21:34:41 crc kubenswrapper[4858]: E0127 21:34:41.336756 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d0ef3045a670dabdff3897e881ab828e5b89886e6df24d850ae90b8331addc7\": container with ID starting with 7d0ef3045a670dabdff3897e881ab828e5b89886e6df24d850ae90b8331addc7 not found: ID does not exist" containerID="7d0ef3045a670dabdff3897e881ab828e5b89886e6df24d850ae90b8331addc7" Jan 27 21:34:41 crc kubenswrapper[4858]: I0127 21:34:41.336842 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d0ef3045a670dabdff3897e881ab828e5b89886e6df24d850ae90b8331addc7"} err="failed to get container status \"7d0ef3045a670dabdff3897e881ab828e5b89886e6df24d850ae90b8331addc7\": rpc error: code = NotFound desc = could not find container \"7d0ef3045a670dabdff3897e881ab828e5b89886e6df24d850ae90b8331addc7\": container with ID starting with 7d0ef3045a670dabdff3897e881ab828e5b89886e6df24d850ae90b8331addc7 not found: ID does not exist" Jan 27 21:34:41 crc kubenswrapper[4858]: I0127 21:34:41.336862 4858 scope.go:117] "RemoveContainer" containerID="4c0171b01664abd014da7b8c6a17bd4e56646d0ef6325290b78748a50240a0da" Jan 27 21:34:41 crc kubenswrapper[4858]: E0127 21:34:41.337205 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c0171b01664abd014da7b8c6a17bd4e56646d0ef6325290b78748a50240a0da\": container with ID starting with 4c0171b01664abd014da7b8c6a17bd4e56646d0ef6325290b78748a50240a0da not found: ID does not exist" containerID="4c0171b01664abd014da7b8c6a17bd4e56646d0ef6325290b78748a50240a0da" Jan 27 21:34:41 crc kubenswrapper[4858]: I0127 21:34:41.337230 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c0171b01664abd014da7b8c6a17bd4e56646d0ef6325290b78748a50240a0da"} err="failed to get container status \"4c0171b01664abd014da7b8c6a17bd4e56646d0ef6325290b78748a50240a0da\": rpc error: code = NotFound desc = could not find container \"4c0171b01664abd014da7b8c6a17bd4e56646d0ef6325290b78748a50240a0da\": container with ID starting with 4c0171b01664abd014da7b8c6a17bd4e56646d0ef6325290b78748a50240a0da not found: ID does not exist" Jan 27 21:34:42 crc kubenswrapper[4858]: I0127 21:34:42.081942 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93ae8318-51d2-41d5-9faf-4098be368144" path="/var/lib/kubelet/pods/93ae8318-51d2-41d5-9faf-4098be368144/volumes" Jan 27 21:34:50 crc kubenswrapper[4858]: I0127 21:34:50.071366 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:34:50 crc kubenswrapper[4858]: E0127 21:34:50.072254 4858 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.769702 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4t22m"] Jan 27 21:34:51 crc kubenswrapper[4858]: E0127 21:34:51.770257 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f8014c-1cf1-4206-80b5-143242ffb7c8" containerName="registry-server" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.770273 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f8014c-1cf1-4206-80b5-143242ffb7c8" containerName="registry-server" Jan 27 21:34:51 crc kubenswrapper[4858]: E0127 21:34:51.770301 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f8014c-1cf1-4206-80b5-143242ffb7c8" containerName="extract-utilities" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.770309 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f8014c-1cf1-4206-80b5-143242ffb7c8" containerName="extract-utilities" Jan 27 21:34:51 crc kubenswrapper[4858]: E0127 21:34:51.770328 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ae8318-51d2-41d5-9faf-4098be368144" containerName="extract-content" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.770336 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ae8318-51d2-41d5-9faf-4098be368144" containerName="extract-content" Jan 27 21:34:51 crc kubenswrapper[4858]: E0127 21:34:51.770371 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ae8318-51d2-41d5-9faf-4098be368144" containerName="extract-utilities" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.770379 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ae8318-51d2-41d5-9faf-4098be368144" containerName="extract-utilities" Jan 27 21:34:51 crc kubenswrapper[4858]: E0127 21:34:51.770406 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f8014c-1cf1-4206-80b5-143242ffb7c8" containerName="extract-content" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.770414 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f8014c-1cf1-4206-80b5-143242ffb7c8" containerName="extract-content" Jan 27 21:34:51 crc kubenswrapper[4858]: E0127 21:34:51.770424 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ae8318-51d2-41d5-9faf-4098be368144" containerName="registry-server" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.770431 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ae8318-51d2-41d5-9faf-4098be368144" containerName="registry-server" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.770666 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f8014c-1cf1-4206-80b5-143242ffb7c8" containerName="registry-server" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.770685 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ae8318-51d2-41d5-9faf-4098be368144" containerName="registry-server" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.772281 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4t22m" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.780785 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4t22m"] Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.831116 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gws2\" (UniqueName: \"kubernetes.io/projected/13e782ef-b67c-499a-af9e-d6e076b757d7-kube-api-access-6gws2\") pod \"redhat-operators-4t22m\" (UID: \"13e782ef-b67c-499a-af9e-d6e076b757d7\") " pod="openshift-marketplace/redhat-operators-4t22m" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.831511 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13e782ef-b67c-499a-af9e-d6e076b757d7-utilities\") pod \"redhat-operators-4t22m\" (UID: \"13e782ef-b67c-499a-af9e-d6e076b757d7\") " pod="openshift-marketplace/redhat-operators-4t22m" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.831783 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13e782ef-b67c-499a-af9e-d6e076b757d7-catalog-content\") pod \"redhat-operators-4t22m\" (UID: \"13e782ef-b67c-499a-af9e-d6e076b757d7\") " pod="openshift-marketplace/redhat-operators-4t22m" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.934019 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gws2\" (UniqueName: \"kubernetes.io/projected/13e782ef-b67c-499a-af9e-d6e076b757d7-kube-api-access-6gws2\") pod \"redhat-operators-4t22m\" (UID: \"13e782ef-b67c-499a-af9e-d6e076b757d7\") " pod="openshift-marketplace/redhat-operators-4t22m" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.934178 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13e782ef-b67c-499a-af9e-d6e076b757d7-utilities\") pod \"redhat-operators-4t22m\" (UID: \"13e782ef-b67c-499a-af9e-d6e076b757d7\") " pod="openshift-marketplace/redhat-operators-4t22m" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.934215 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13e782ef-b67c-499a-af9e-d6e076b757d7-catalog-content\") pod \"redhat-operators-4t22m\" (UID: \"13e782ef-b67c-499a-af9e-d6e076b757d7\") " pod="openshift-marketplace/redhat-operators-4t22m" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.934707 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13e782ef-b67c-499a-af9e-d6e076b757d7-utilities\") pod \"redhat-operators-4t22m\" (UID: \"13e782ef-b67c-499a-af9e-d6e076b757d7\") " pod="openshift-marketplace/redhat-operators-4t22m" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.934829 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13e782ef-b67c-499a-af9e-d6e076b757d7-catalog-content\") pod \"redhat-operators-4t22m\" (UID: \"13e782ef-b67c-499a-af9e-d6e076b757d7\") " pod="openshift-marketplace/redhat-operators-4t22m" Jan 27 21:34:51 crc kubenswrapper[4858]: I0127 21:34:51.961608 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6gws2\" (UniqueName: \"kubernetes.io/projected/13e782ef-b67c-499a-af9e-d6e076b757d7-kube-api-access-6gws2\") pod \"redhat-operators-4t22m\" (UID: \"13e782ef-b67c-499a-af9e-d6e076b757d7\") " pod="openshift-marketplace/redhat-operators-4t22m" Jan 27 21:34:52 crc kubenswrapper[4858]: I0127 21:34:52.110372 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4t22m" Jan 27 21:34:52 crc kubenswrapper[4858]: I0127 21:34:52.649702 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4t22m"] Jan 27 21:34:53 crc kubenswrapper[4858]: I0127 21:34:53.345691 4858 generic.go:334] "Generic (PLEG): container finished" podID="13e782ef-b67c-499a-af9e-d6e076b757d7" containerID="a7c8860a46180dba9cce4ebd8eb1d57eae6145e33606211590d0d4fe73070e15" exitCode=0 Jan 27 21:34:53 crc kubenswrapper[4858]: I0127 21:34:53.345789 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4t22m" event={"ID":"13e782ef-b67c-499a-af9e-d6e076b757d7","Type":"ContainerDied","Data":"a7c8860a46180dba9cce4ebd8eb1d57eae6145e33606211590d0d4fe73070e15"} Jan 27 21:34:53 crc kubenswrapper[4858]: I0127 21:34:53.346037 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4t22m" event={"ID":"13e782ef-b67c-499a-af9e-d6e076b757d7","Type":"ContainerStarted","Data":"ceb96ff92e7bbc8fe903557eb06b0ee8d1fcd7b7e23e01fe838f7d2f4e8d9f15"} Jan 27 21:34:53 crc kubenswrapper[4858]: I0127 21:34:53.349506 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 21:34:54 crc kubenswrapper[4858]: I0127 21:34:54.357244 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4t22m" event={"ID":"13e782ef-b67c-499a-af9e-d6e076b757d7","Type":"ContainerStarted","Data":"e1ed82b351672293ab35ed225b8ad49323dbb1704ff469e0014d8c38da6c8023"} Jan 27 21:34:59 crc kubenswrapper[4858]: I0127 21:34:59.405479 4858 generic.go:334] "Generic (PLEG): container finished" podID="13e782ef-b67c-499a-af9e-d6e076b757d7" containerID="e1ed82b351672293ab35ed225b8ad49323dbb1704ff469e0014d8c38da6c8023" exitCode=0 Jan 27 21:34:59 crc kubenswrapper[4858]: I0127 21:34:59.405563 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4t22m" event={"ID":"13e782ef-b67c-499a-af9e-d6e076b757d7","Type":"ContainerDied","Data":"e1ed82b351672293ab35ed225b8ad49323dbb1704ff469e0014d8c38da6c8023"} Jan 27 21:35:00 crc kubenswrapper[4858]: I0127 21:35:00.415904 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4t22m" event={"ID":"13e782ef-b67c-499a-af9e-d6e076b757d7","Type":"ContainerStarted","Data":"8b089774d8b78ede959110594b9f12ddeded5bab1cf100cc4173cc0e5b4bcf30"} Jan 27 21:35:00 crc kubenswrapper[4858]: I0127 21:35:00.438167 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4t22m" podStartSLOduration=2.960537411 podStartE2EDuration="9.438149209s" podCreationTimestamp="2026-01-27 21:34:51 +0000 UTC" firstStartedPulling="2026-01-27 21:34:53.34926857 +0000 UTC m=+5238.057084266" lastFinishedPulling="2026-01-27 21:34:59.826880358 +0000 UTC m=+5244.534696064" observedRunningTime="2026-01-27 21:35:00.434520447 +0000 UTC m=+5245.142336173" watchObservedRunningTime="2026-01-27 21:35:00.438149209 +0000 UTC m=+5245.145964915" Jan 27 21:35:01 crc 
kubenswrapper[4858]: I0127 21:35:01.070964 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:35:01 crc kubenswrapper[4858]: E0127 21:35:01.071467 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:35:02 crc kubenswrapper[4858]: I0127 21:35:02.110741 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4t22m" Jan 27 21:35:02 crc kubenswrapper[4858]: I0127 21:35:02.111836 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4t22m" Jan 27 21:35:03 crc kubenswrapper[4858]: I0127 21:35:03.318176 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4t22m" podUID="13e782ef-b67c-499a-af9e-d6e076b757d7" containerName="registry-server" probeResult="failure" output=< Jan 27 21:35:03 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 21:35:03 crc kubenswrapper[4858]: > Jan 27 21:35:12 crc kubenswrapper[4858]: I0127 21:35:12.166764 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4t22m" Jan 27 21:35:12 crc kubenswrapper[4858]: I0127 21:35:12.223283 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4t22m" Jan 27 21:35:12 crc kubenswrapper[4858]: I0127 21:35:12.413968 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4t22m"] Jan 27 21:35:13 crc kubenswrapper[4858]: I0127 21:35:13.550941 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4t22m" podUID="13e782ef-b67c-499a-af9e-d6e076b757d7" containerName="registry-server" containerID="cri-o://8b089774d8b78ede959110594b9f12ddeded5bab1cf100cc4173cc0e5b4bcf30" gracePeriod=2 Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.066347 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4t22m" Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.071999 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:35:14 crc kubenswrapper[4858]: E0127 21:35:14.072334 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.156970 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13e782ef-b67c-499a-af9e-d6e076b757d7-catalog-content\") pod \"13e782ef-b67c-499a-af9e-d6e076b757d7\" (UID: \"13e782ef-b67c-499a-af9e-d6e076b757d7\") " Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.157206 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gws2\" (UniqueName: \"kubernetes.io/projected/13e782ef-b67c-499a-af9e-d6e076b757d7-kube-api-access-6gws2\") pod \"13e782ef-b67c-499a-af9e-d6e076b757d7\" (UID: \"13e782ef-b67c-499a-af9e-d6e076b757d7\") " Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.157342 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13e782ef-b67c-499a-af9e-d6e076b757d7-utilities\") pod \"13e782ef-b67c-499a-af9e-d6e076b757d7\" (UID: \"13e782ef-b67c-499a-af9e-d6e076b757d7\") " Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.157968 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13e782ef-b67c-499a-af9e-d6e076b757d7-utilities" (OuterVolumeSpecName: "utilities") pod "13e782ef-b67c-499a-af9e-d6e076b757d7" (UID: "13e782ef-b67c-499a-af9e-d6e076b757d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.178346 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13e782ef-b67c-499a-af9e-d6e076b757d7-kube-api-access-6gws2" (OuterVolumeSpecName: "kube-api-access-6gws2") pod "13e782ef-b67c-499a-af9e-d6e076b757d7" (UID: "13e782ef-b67c-499a-af9e-d6e076b757d7"). InnerVolumeSpecName "kube-api-access-6gws2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.260629 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13e782ef-b67c-499a-af9e-d6e076b757d7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.260678 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gws2\" (UniqueName: \"kubernetes.io/projected/13e782ef-b67c-499a-af9e-d6e076b757d7-kube-api-access-6gws2\") on node \"crc\" DevicePath \"\"" Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.297458 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13e782ef-b67c-499a-af9e-d6e076b757d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13e782ef-b67c-499a-af9e-d6e076b757d7" (UID: "13e782ef-b67c-499a-af9e-d6e076b757d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.363005 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13e782ef-b67c-499a-af9e-d6e076b757d7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.560862 4858 generic.go:334] "Generic (PLEG): container finished" podID="13e782ef-b67c-499a-af9e-d6e076b757d7" containerID="8b089774d8b78ede959110594b9f12ddeded5bab1cf100cc4173cc0e5b4bcf30" exitCode=0 Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.560939 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4t22m" Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.560953 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4t22m" event={"ID":"13e782ef-b67c-499a-af9e-d6e076b757d7","Type":"ContainerDied","Data":"8b089774d8b78ede959110594b9f12ddeded5bab1cf100cc4173cc0e5b4bcf30"} Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.562749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4t22m" event={"ID":"13e782ef-b67c-499a-af9e-d6e076b757d7","Type":"ContainerDied","Data":"ceb96ff92e7bbc8fe903557eb06b0ee8d1fcd7b7e23e01fe838f7d2f4e8d9f15"} Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.562778 4858 scope.go:117] "RemoveContainer" containerID="8b089774d8b78ede959110594b9f12ddeded5bab1cf100cc4173cc0e5b4bcf30" Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.584222 4858 scope.go:117] "RemoveContainer" containerID="e1ed82b351672293ab35ed225b8ad49323dbb1704ff469e0014d8c38da6c8023" Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.611422 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4t22m"] Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.621671 4858 scope.go:117] "RemoveContainer" containerID="a7c8860a46180dba9cce4ebd8eb1d57eae6145e33606211590d0d4fe73070e15" Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.621989 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4t22m"] Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.656301 4858 scope.go:117] "RemoveContainer" containerID="8b089774d8b78ede959110594b9f12ddeded5bab1cf100cc4173cc0e5b4bcf30" Jan 27 21:35:14 crc kubenswrapper[4858]: E0127 21:35:14.657007 4858 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b089774d8b78ede959110594b9f12ddeded5bab1cf100cc4173cc0e5b4bcf30\": container with ID starting with 8b089774d8b78ede959110594b9f12ddeded5bab1cf100cc4173cc0e5b4bcf30 not found: ID does not exist" containerID="8b089774d8b78ede959110594b9f12ddeded5bab1cf100cc4173cc0e5b4bcf30" Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.657160 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b089774d8b78ede959110594b9f12ddeded5bab1cf100cc4173cc0e5b4bcf30"} err="failed to get container status \"8b089774d8b78ede959110594b9f12ddeded5bab1cf100cc4173cc0e5b4bcf30\": rpc error: code = NotFound desc = could not find container \"8b089774d8b78ede959110594b9f12ddeded5bab1cf100cc4173cc0e5b4bcf30\": container with ID starting with 8b089774d8b78ede959110594b9f12ddeded5bab1cf100cc4173cc0e5b4bcf30 not found: ID does not exist" Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.657188 4858 scope.go:117] "RemoveContainer" containerID="e1ed82b351672293ab35ed225b8ad49323dbb1704ff469e0014d8c38da6c8023" Jan 27 21:35:14 crc kubenswrapper[4858]: E0127 21:35:14.657763 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1ed82b351672293ab35ed225b8ad49323dbb1704ff469e0014d8c38da6c8023\": container with ID starting with e1ed82b351672293ab35ed225b8ad49323dbb1704ff469e0014d8c38da6c8023 not found: ID does not exist" containerID="e1ed82b351672293ab35ed225b8ad49323dbb1704ff469e0014d8c38da6c8023" Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.657790 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1ed82b351672293ab35ed225b8ad49323dbb1704ff469e0014d8c38da6c8023"} err="failed to get container status \"e1ed82b351672293ab35ed225b8ad49323dbb1704ff469e0014d8c38da6c8023\": rpc error: code = NotFound desc = could not find container \"e1ed82b351672293ab35ed225b8ad49323dbb1704ff469e0014d8c38da6c8023\": container with ID starting with e1ed82b351672293ab35ed225b8ad49323dbb1704ff469e0014d8c38da6c8023 not found: ID does not exist" Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.657808 4858 scope.go:117] "RemoveContainer" containerID="a7c8860a46180dba9cce4ebd8eb1d57eae6145e33606211590d0d4fe73070e15" Jan 27 21:35:14 crc kubenswrapper[4858]: E0127 21:35:14.658087 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7c8860a46180dba9cce4ebd8eb1d57eae6145e33606211590d0d4fe73070e15\": container with ID starting with a7c8860a46180dba9cce4ebd8eb1d57eae6145e33606211590d0d4fe73070e15 not found: ID does not exist" containerID="a7c8860a46180dba9cce4ebd8eb1d57eae6145e33606211590d0d4fe73070e15" Jan 27 21:35:14 crc kubenswrapper[4858]: I0127 21:35:14.658127 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7c8860a46180dba9cce4ebd8eb1d57eae6145e33606211590d0d4fe73070e15"} err="failed to get container status \"a7c8860a46180dba9cce4ebd8eb1d57eae6145e33606211590d0d4fe73070e15\": rpc error: code = NotFound desc = could not find container \"a7c8860a46180dba9cce4ebd8eb1d57eae6145e33606211590d0d4fe73070e15\": container with ID starting with a7c8860a46180dba9cce4ebd8eb1d57eae6145e33606211590d0d4fe73070e15 not found: ID does not exist" Jan 27 21:35:16 crc kubenswrapper[4858]: I0127 21:35:16.084418 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="13e782ef-b67c-499a-af9e-d6e076b757d7" path="/var/lib/kubelet/pods/13e782ef-b67c-499a-af9e-d6e076b757d7/volumes" Jan 27 21:35:25 crc kubenswrapper[4858]: I0127 21:35:25.072353 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:35:25 crc kubenswrapper[4858]: E0127 21:35:25.073133 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:35:39 crc kubenswrapper[4858]: I0127 21:35:39.072297 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:35:39 crc kubenswrapper[4858]: E0127 21:35:39.073157 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:35:53 crc kubenswrapper[4858]: I0127 21:35:53.072062 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:35:53 crc kubenswrapper[4858]: E0127 21:35:53.073060 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:36:07 crc kubenswrapper[4858]: I0127 21:36:07.071138 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:36:08 crc kubenswrapper[4858]: I0127 21:36:08.081383 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"698ab4411b28ea8110afc7f1b8c8ad80a5585b0249a399b46f6d6b5798b379ff"} Jan 27 21:38:29 crc kubenswrapper[4858]: I0127 21:38:29.328315 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:38:29 crc kubenswrapper[4858]: I0127 21:38:29.328880 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:38:59 crc kubenswrapper[4858]: I0127 21:38:59.329851 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:38:59 crc kubenswrapper[4858]: I0127 21:38:59.330447 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:39:29 crc kubenswrapper[4858]: I0127 21:39:29.329988 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:39:29 crc kubenswrapper[4858]: I0127 21:39:29.330501 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:39:29 crc kubenswrapper[4858]: I0127 21:39:29.330546 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 21:39:29 crc kubenswrapper[4858]: I0127 21:39:29.331392 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"698ab4411b28ea8110afc7f1b8c8ad80a5585b0249a399b46f6d6b5798b379ff"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 21:39:29 crc kubenswrapper[4858]: I0127 21:39:29.331460 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://698ab4411b28ea8110afc7f1b8c8ad80a5585b0249a399b46f6d6b5798b379ff" gracePeriod=600 Jan 27 21:39:30 crc kubenswrapper[4858]: I0127 21:39:30.072367 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="698ab4411b28ea8110afc7f1b8c8ad80a5585b0249a399b46f6d6b5798b379ff" exitCode=0 Jan 27 21:39:30 crc kubenswrapper[4858]: I0127 21:39:30.083237 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"698ab4411b28ea8110afc7f1b8c8ad80a5585b0249a399b46f6d6b5798b379ff"} Jan 27 21:39:30 crc kubenswrapper[4858]: I0127 21:39:30.083284 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5"} Jan 27 21:39:30 crc kubenswrapper[4858]: I0127 21:39:30.083305 4858 scope.go:117] "RemoveContainer" containerID="41a5e8c205a7554e3ed7e5982577574e35468ff7e05ec6f2c8c2f9c621e4c589" Jan 27 21:41:07 crc kubenswrapper[4858]: I0127 
21:41:07.820970 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fjpp2"] Jan 27 21:41:07 crc kubenswrapper[4858]: E0127 21:41:07.822037 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13e782ef-b67c-499a-af9e-d6e076b757d7" containerName="extract-utilities" Jan 27 21:41:07 crc kubenswrapper[4858]: I0127 21:41:07.822056 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="13e782ef-b67c-499a-af9e-d6e076b757d7" containerName="extract-utilities" Jan 27 21:41:07 crc kubenswrapper[4858]: E0127 21:41:07.822073 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13e782ef-b67c-499a-af9e-d6e076b757d7" containerName="extract-content" Jan 27 21:41:07 crc kubenswrapper[4858]: I0127 21:41:07.822082 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="13e782ef-b67c-499a-af9e-d6e076b757d7" containerName="extract-content" Jan 27 21:41:07 crc kubenswrapper[4858]: E0127 21:41:07.822112 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13e782ef-b67c-499a-af9e-d6e076b757d7" containerName="registry-server" Jan 27 21:41:07 crc kubenswrapper[4858]: I0127 21:41:07.822121 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="13e782ef-b67c-499a-af9e-d6e076b757d7" containerName="registry-server" Jan 27 21:41:07 crc kubenswrapper[4858]: I0127 21:41:07.822388 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="13e782ef-b67c-499a-af9e-d6e076b757d7" containerName="registry-server" Jan 27 21:41:07 crc kubenswrapper[4858]: I0127 21:41:07.834145 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fjpp2" Jan 27 21:41:07 crc kubenswrapper[4858]: I0127 21:41:07.838859 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fjpp2"] Jan 27 21:41:07 crc kubenswrapper[4858]: I0127 21:41:07.917824 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3a26b5f-a77f-467c-8661-00420f7d47e2-utilities\") pod \"certified-operators-fjpp2\" (UID: \"a3a26b5f-a77f-467c-8661-00420f7d47e2\") " pod="openshift-marketplace/certified-operators-fjpp2" Jan 27 21:41:07 crc kubenswrapper[4858]: I0127 21:41:07.917874 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3a26b5f-a77f-467c-8661-00420f7d47e2-catalog-content\") pod \"certified-operators-fjpp2\" (UID: \"a3a26b5f-a77f-467c-8661-00420f7d47e2\") " pod="openshift-marketplace/certified-operators-fjpp2" Jan 27 21:41:07 crc kubenswrapper[4858]: I0127 21:41:07.917908 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db2x9\" (UniqueName: \"kubernetes.io/projected/a3a26b5f-a77f-467c-8661-00420f7d47e2-kube-api-access-db2x9\") pod \"certified-operators-fjpp2\" (UID: \"a3a26b5f-a77f-467c-8661-00420f7d47e2\") " pod="openshift-marketplace/certified-operators-fjpp2" Jan 27 21:41:08 crc kubenswrapper[4858]: I0127 21:41:08.021382 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3a26b5f-a77f-467c-8661-00420f7d47e2-utilities\") pod \"certified-operators-fjpp2\" (UID: \"a3a26b5f-a77f-467c-8661-00420f7d47e2\") " pod="openshift-marketplace/certified-operators-fjpp2" Jan 27 21:41:08 crc 
kubenswrapper[4858]: I0127 21:41:08.021478 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3a26b5f-a77f-467c-8661-00420f7d47e2-catalog-content\") pod \"certified-operators-fjpp2\" (UID: \"a3a26b5f-a77f-467c-8661-00420f7d47e2\") " pod="openshift-marketplace/certified-operators-fjpp2" Jan 27 21:41:08 crc kubenswrapper[4858]: I0127 21:41:08.021590 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db2x9\" (UniqueName: \"kubernetes.io/projected/a3a26b5f-a77f-467c-8661-00420f7d47e2-kube-api-access-db2x9\") pod \"certified-operators-fjpp2\" (UID: \"a3a26b5f-a77f-467c-8661-00420f7d47e2\") " pod="openshift-marketplace/certified-operators-fjpp2" Jan 27 21:41:08 crc kubenswrapper[4858]: I0127 21:41:08.021961 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3a26b5f-a77f-467c-8661-00420f7d47e2-utilities\") pod \"certified-operators-fjpp2\" (UID: \"a3a26b5f-a77f-467c-8661-00420f7d47e2\") " pod="openshift-marketplace/certified-operators-fjpp2" Jan 27 21:41:08 crc kubenswrapper[4858]: I0127 21:41:08.021992 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3a26b5f-a77f-467c-8661-00420f7d47e2-catalog-content\") pod \"certified-operators-fjpp2\" (UID: \"a3a26b5f-a77f-467c-8661-00420f7d47e2\") " pod="openshift-marketplace/certified-operators-fjpp2" Jan 27 21:41:08 crc kubenswrapper[4858]: I0127 21:41:08.043055 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db2x9\" (UniqueName: \"kubernetes.io/projected/a3a26b5f-a77f-467c-8661-00420f7d47e2-kube-api-access-db2x9\") pod \"certified-operators-fjpp2\" (UID: \"a3a26b5f-a77f-467c-8661-00420f7d47e2\") " pod="openshift-marketplace/certified-operators-fjpp2" Jan 27 21:41:08 crc kubenswrapper[4858]: I0127 21:41:08.198111 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fjpp2" Jan 27 21:41:08 crc kubenswrapper[4858]: I0127 21:41:08.789920 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fjpp2"] Jan 27 21:41:09 crc kubenswrapper[4858]: I0127 21:41:09.155917 4858 generic.go:334] "Generic (PLEG): container finished" podID="a3a26b5f-a77f-467c-8661-00420f7d47e2" containerID="b452035effb2480ba78e76a4bbb9b78f6e1f13ff7b498b7c6437c14c9c85703b" exitCode=0 Jan 27 21:41:09 crc kubenswrapper[4858]: I0127 21:41:09.156210 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjpp2" event={"ID":"a3a26b5f-a77f-467c-8661-00420f7d47e2","Type":"ContainerDied","Data":"b452035effb2480ba78e76a4bbb9b78f6e1f13ff7b498b7c6437c14c9c85703b"} Jan 27 21:41:09 crc kubenswrapper[4858]: I0127 21:41:09.156240 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjpp2" event={"ID":"a3a26b5f-a77f-467c-8661-00420f7d47e2","Type":"ContainerStarted","Data":"693af705b9d4f6065c976f575665b33a0567988e36bdd1e7f3a6ff3fc88753b9"} Jan 27 21:41:09 crc kubenswrapper[4858]: I0127 21:41:09.159444 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 21:41:10 crc kubenswrapper[4858]: I0127 21:41:10.166132 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjpp2" event={"ID":"a3a26b5f-a77f-467c-8661-00420f7d47e2","Type":"ContainerStarted","Data":"c6135955f664cb20028ebf64edb56188b278fe5640c39055619f050ab7a242db"} Jan 27 21:41:11 crc kubenswrapper[4858]: I0127 21:41:11.184155 4858 generic.go:334] "Generic (PLEG): container finished" podID="a3a26b5f-a77f-467c-8661-00420f7d47e2" containerID="c6135955f664cb20028ebf64edb56188b278fe5640c39055619f050ab7a242db" exitCode=0 Jan 27 21:41:11 crc kubenswrapper[4858]: I0127 21:41:11.184504 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjpp2" event={"ID":"a3a26b5f-a77f-467c-8661-00420f7d47e2","Type":"ContainerDied","Data":"c6135955f664cb20028ebf64edb56188b278fe5640c39055619f050ab7a242db"} Jan 27 21:41:12 crc kubenswrapper[4858]: I0127 21:41:12.200830 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjpp2" event={"ID":"a3a26b5f-a77f-467c-8661-00420f7d47e2","Type":"ContainerStarted","Data":"08f0181cd6c56467906e2a792071c15874a040a9b572cad34e711d66f1cb4eb0"} Jan 27 21:41:12 crc kubenswrapper[4858]: I0127 21:41:12.241904 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fjpp2" podStartSLOduration=2.792823629 podStartE2EDuration="5.241883075s" podCreationTimestamp="2026-01-27 21:41:07 +0000 UTC" firstStartedPulling="2026-01-27 21:41:09.1592136 +0000 UTC m=+5613.867029306" lastFinishedPulling="2026-01-27 21:41:11.608273036 +0000 UTC m=+5616.316088752" observedRunningTime="2026-01-27 21:41:12.231995857 +0000 UTC m=+5616.939811593" watchObservedRunningTime="2026-01-27 21:41:12.241883075 +0000 UTC m=+5616.949698781" Jan 27 21:41:18 crc kubenswrapper[4858]: I0127 21:41:18.199318 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fjpp2" Jan 27 21:41:18 crc kubenswrapper[4858]: I0127 21:41:18.199735 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fjpp2" 
Jan 27 21:41:18 crc kubenswrapper[4858]: I0127 21:41:18.251398 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fjpp2" Jan 27 21:41:18 crc kubenswrapper[4858]: I0127 21:41:18.316474 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fjpp2" Jan 27 21:41:18 crc kubenswrapper[4858]: I0127 21:41:18.488006 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fjpp2"] Jan 27 21:41:20 crc kubenswrapper[4858]: I0127 21:41:20.287312 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fjpp2" podUID="a3a26b5f-a77f-467c-8661-00420f7d47e2" containerName="registry-server" containerID="cri-o://08f0181cd6c56467906e2a792071c15874a040a9b572cad34e711d66f1cb4eb0" gracePeriod=2 Jan 27 21:41:21 crc kubenswrapper[4858]: I0127 21:41:21.315167 4858 generic.go:334] "Generic (PLEG): container finished" podID="a3a26b5f-a77f-467c-8661-00420f7d47e2" containerID="08f0181cd6c56467906e2a792071c15874a040a9b572cad34e711d66f1cb4eb0" exitCode=0 Jan 27 21:41:21 crc kubenswrapper[4858]: I0127 21:41:21.315251 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjpp2" event={"ID":"a3a26b5f-a77f-467c-8661-00420f7d47e2","Type":"ContainerDied","Data":"08f0181cd6c56467906e2a792071c15874a040a9b572cad34e711d66f1cb4eb0"} Jan 27 21:41:21 crc kubenswrapper[4858]: I0127 21:41:21.315826 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjpp2" event={"ID":"a3a26b5f-a77f-467c-8661-00420f7d47e2","Type":"ContainerDied","Data":"693af705b9d4f6065c976f575665b33a0567988e36bdd1e7f3a6ff3fc88753b9"} Jan 27 21:41:21 crc kubenswrapper[4858]: I0127 21:41:21.315853 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="693af705b9d4f6065c976f575665b33a0567988e36bdd1e7f3a6ff3fc88753b9" Jan 27 21:41:21 crc kubenswrapper[4858]: I0127 21:41:21.352017 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fjpp2" Jan 27 21:41:21 crc kubenswrapper[4858]: I0127 21:41:21.451736 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3a26b5f-a77f-467c-8661-00420f7d47e2-utilities\") pod \"a3a26b5f-a77f-467c-8661-00420f7d47e2\" (UID: \"a3a26b5f-a77f-467c-8661-00420f7d47e2\") " Jan 27 21:41:21 crc kubenswrapper[4858]: I0127 21:41:21.451825 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3a26b5f-a77f-467c-8661-00420f7d47e2-catalog-content\") pod \"a3a26b5f-a77f-467c-8661-00420f7d47e2\" (UID: \"a3a26b5f-a77f-467c-8661-00420f7d47e2\") " Jan 27 21:41:21 crc kubenswrapper[4858]: I0127 21:41:21.452150 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db2x9\" (UniqueName: \"kubernetes.io/projected/a3a26b5f-a77f-467c-8661-00420f7d47e2-kube-api-access-db2x9\") pod \"a3a26b5f-a77f-467c-8661-00420f7d47e2\" (UID: \"a3a26b5f-a77f-467c-8661-00420f7d47e2\") " Jan 27 21:41:21 crc kubenswrapper[4858]: I0127 21:41:21.453297 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3a26b5f-a77f-467c-8661-00420f7d47e2-utilities" (OuterVolumeSpecName: "utilities") pod "a3a26b5f-a77f-467c-8661-00420f7d47e2" (UID: "a3a26b5f-a77f-467c-8661-00420f7d47e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:41:21 crc kubenswrapper[4858]: I0127 21:41:21.459051 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3a26b5f-a77f-467c-8661-00420f7d47e2-kube-api-access-db2x9" (OuterVolumeSpecName: "kube-api-access-db2x9") pod "a3a26b5f-a77f-467c-8661-00420f7d47e2" (UID: "a3a26b5f-a77f-467c-8661-00420f7d47e2"). InnerVolumeSpecName "kube-api-access-db2x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:41:21 crc kubenswrapper[4858]: I0127 21:41:21.506847 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3a26b5f-a77f-467c-8661-00420f7d47e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3a26b5f-a77f-467c-8661-00420f7d47e2" (UID: "a3a26b5f-a77f-467c-8661-00420f7d47e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:41:21 crc kubenswrapper[4858]: I0127 21:41:21.555367 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3a26b5f-a77f-467c-8661-00420f7d47e2-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:41:21 crc kubenswrapper[4858]: I0127 21:41:21.555414 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3a26b5f-a77f-467c-8661-00420f7d47e2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:41:21 crc kubenswrapper[4858]: I0127 21:41:21.555432 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db2x9\" (UniqueName: \"kubernetes.io/projected/a3a26b5f-a77f-467c-8661-00420f7d47e2-kube-api-access-db2x9\") on node \"crc\" DevicePath \"\"" Jan 27 21:41:22 crc kubenswrapper[4858]: I0127 21:41:22.322845 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fjpp2" Jan 27 21:41:22 crc kubenswrapper[4858]: I0127 21:41:22.345907 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fjpp2"] Jan 27 21:41:22 crc kubenswrapper[4858]: I0127 21:41:22.360533 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fjpp2"] Jan 27 21:41:24 crc kubenswrapper[4858]: I0127 21:41:24.090181 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3a26b5f-a77f-467c-8661-00420f7d47e2" path="/var/lib/kubelet/pods/a3a26b5f-a77f-467c-8661-00420f7d47e2/volumes" Jan 27 21:41:29 crc kubenswrapper[4858]: I0127 21:41:29.328918 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:41:29 crc kubenswrapper[4858]: I0127 21:41:29.330026 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:41:57 crc kubenswrapper[4858]: I0127 21:41:57.669965 4858 generic.go:334] "Generic (PLEG): container finished" podID="0671e111-61e9-439b-9457-c29b7d18a1f7" containerID="35ed16ec0d2db2d7ec11649d4c05327e45956530b22b23107153667079075a32" exitCode=0 Jan 27 21:41:57 crc kubenswrapper[4858]: I0127 21:41:57.670079 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0671e111-61e9-439b-9457-c29b7d18a1f7","Type":"ContainerDied","Data":"35ed16ec0d2db2d7ec11649d4c05327e45956530b22b23107153667079075a32"} Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.086695 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.256528 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0671e111-61e9-439b-9457-c29b7d18a1f7-test-operator-ephemeral-temporary\") pod \"0671e111-61e9-439b-9457-c29b7d18a1f7\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.256624 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-openstack-config-secret\") pod \"0671e111-61e9-439b-9457-c29b7d18a1f7\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.256687 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x9mw\" (UniqueName: \"kubernetes.io/projected/0671e111-61e9-439b-9457-c29b7d18a1f7-kube-api-access-5x9mw\") pod \"0671e111-61e9-439b-9457-c29b7d18a1f7\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.256753 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-ssh-key\") pod \"0671e111-61e9-439b-9457-c29b7d18a1f7\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.256837 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0671e111-61e9-439b-9457-c29b7d18a1f7-openstack-config\") pod \"0671e111-61e9-439b-9457-c29b7d18a1f7\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.256935 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0671e111-61e9-439b-9457-c29b7d18a1f7-config-data\") pod \"0671e111-61e9-439b-9457-c29b7d18a1f7\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.256958 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"0671e111-61e9-439b-9457-c29b7d18a1f7\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.257024 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0671e111-61e9-439b-9457-c29b7d18a1f7-test-operator-ephemeral-workdir\") pod \"0671e111-61e9-439b-9457-c29b7d18a1f7\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.257160 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-ca-certs\") pod \"0671e111-61e9-439b-9457-c29b7d18a1f7\" (UID: \"0671e111-61e9-439b-9457-c29b7d18a1f7\") " Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.257297 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0671e111-61e9-439b-9457-c29b7d18a1f7-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "0671e111-61e9-439b-9457-c29b7d18a1f7" (UID: "0671e111-61e9-439b-9457-c29b7d18a1f7"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.257832 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0671e111-61e9-439b-9457-c29b7d18a1f7-config-data" (OuterVolumeSpecName: "config-data") pod "0671e111-61e9-439b-9457-c29b7d18a1f7" (UID: "0671e111-61e9-439b-9457-c29b7d18a1f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.257855 4858 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0671e111-61e9-439b-9457-c29b7d18a1f7-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.265405 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0671e111-61e9-439b-9457-c29b7d18a1f7-kube-api-access-5x9mw" (OuterVolumeSpecName: "kube-api-access-5x9mw") pod "0671e111-61e9-439b-9457-c29b7d18a1f7" (UID: "0671e111-61e9-439b-9457-c29b7d18a1f7"). InnerVolumeSpecName "kube-api-access-5x9mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.267372 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "0671e111-61e9-439b-9457-c29b7d18a1f7" (UID: "0671e111-61e9-439b-9457-c29b7d18a1f7"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.269384 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0671e111-61e9-439b-9457-c29b7d18a1f7-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "0671e111-61e9-439b-9457-c29b7d18a1f7" (UID: "0671e111-61e9-439b-9457-c29b7d18a1f7"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.290083 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "0671e111-61e9-439b-9457-c29b7d18a1f7" (UID: "0671e111-61e9-439b-9457-c29b7d18a1f7"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.295907 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0671e111-61e9-439b-9457-c29b7d18a1f7" (UID: "0671e111-61e9-439b-9457-c29b7d18a1f7"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.302406 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0671e111-61e9-439b-9457-c29b7d18a1f7" (UID: "0671e111-61e9-439b-9457-c29b7d18a1f7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.328630 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.328967 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.340417 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0671e111-61e9-439b-9457-c29b7d18a1f7-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0671e111-61e9-439b-9457-c29b7d18a1f7" (UID: "0671e111-61e9-439b-9457-c29b7d18a1f7"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.359706 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.359756 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0671e111-61e9-439b-9457-c29b7d18a1f7-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.359771 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0671e111-61e9-439b-9457-c29b7d18a1f7-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.359817 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.359833 4858 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0671e111-61e9-439b-9457-c29b7d18a1f7-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.359845 4858 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.359856 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0671e111-61e9-439b-9457-c29b7d18a1f7-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 21:41:59 
Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.359868 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x9mw\" (UniqueName: \"kubernetes.io/projected/0671e111-61e9-439b-9457-c29b7d18a1f7-kube-api-access-5x9mw\") on node \"crc\" DevicePath \"\""
Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.385003 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.469986 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.692361 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0671e111-61e9-439b-9457-c29b7d18a1f7","Type":"ContainerDied","Data":"e6f8cce5105724c06ad3a727828346d0c0a742e58a1e428fafdd052e8d207f80"}
Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.692426 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6f8cce5105724c06ad3a727828346d0c0a742e58a1e428fafdd052e8d207f80"
Jan 27 21:41:59 crc kubenswrapper[4858]: I0127 21:41:59.692500 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 27 21:42:02 crc kubenswrapper[4858]: I0127 21:42:02.398252 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 27 21:42:02 crc kubenswrapper[4858]: E0127 21:42:02.399009 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a26b5f-a77f-467c-8661-00420f7d47e2" containerName="extract-content"
Jan 27 21:42:02 crc kubenswrapper[4858]: I0127 21:42:02.399024 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3a26b5f-a77f-467c-8661-00420f7d47e2" containerName="extract-content"
Jan 27 21:42:02 crc kubenswrapper[4858]: E0127 21:42:02.399058 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0671e111-61e9-439b-9457-c29b7d18a1f7" containerName="tempest-tests-tempest-tests-runner"
Jan 27 21:42:02 crc kubenswrapper[4858]: I0127 21:42:02.399065 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0671e111-61e9-439b-9457-c29b7d18a1f7" containerName="tempest-tests-tempest-tests-runner"
Jan 27 21:42:02 crc kubenswrapper[4858]: E0127 21:42:02.399085 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a26b5f-a77f-467c-8661-00420f7d47e2" containerName="extract-utilities"
Jan 27 21:42:02 crc kubenswrapper[4858]: I0127 21:42:02.399091 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3a26b5f-a77f-467c-8661-00420f7d47e2" containerName="extract-utilities"
Jan 27 21:42:02 crc kubenswrapper[4858]: E0127 21:42:02.399111 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3a26b5f-a77f-467c-8661-00420f7d47e2" containerName="registry-server"
Jan 27 21:42:02 crc kubenswrapper[4858]: I0127 21:42:02.399117 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3a26b5f-a77f-467c-8661-00420f7d47e2" containerName="registry-server"
Jan 27 21:42:02 crc kubenswrapper[4858]: I0127 21:42:02.399300 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3a26b5f-a77f-467c-8661-00420f7d47e2" containerName="registry-server"
"RemoveStaleState removing state" podUID="0671e111-61e9-439b-9457-c29b7d18a1f7" containerName="tempest-tests-tempest-tests-runner" Jan 27 21:42:02 crc kubenswrapper[4858]: I0127 21:42:02.400061 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 21:42:02 crc kubenswrapper[4858]: I0127 21:42:02.402584 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-g529f" Jan 27 21:42:02 crc kubenswrapper[4858]: I0127 21:42:02.409661 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 21:42:02 crc kubenswrapper[4858]: I0127 21:42:02.557091 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5jk4\" (UniqueName: \"kubernetes.io/projected/4bc44a79-c7d4-472d-9d17-2b69e894630f-kube-api-access-t5jk4\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4bc44a79-c7d4-472d-9d17-2b69e894630f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 21:42:02 crc kubenswrapper[4858]: I0127 21:42:02.557266 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4bc44a79-c7d4-472d-9d17-2b69e894630f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 21:42:02 crc kubenswrapper[4858]: I0127 21:42:02.659571 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5jk4\" (UniqueName: \"kubernetes.io/projected/4bc44a79-c7d4-472d-9d17-2b69e894630f-kube-api-access-t5jk4\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4bc44a79-c7d4-472d-9d17-2b69e894630f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 21:42:02 crc kubenswrapper[4858]: I0127 21:42:02.659967 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4bc44a79-c7d4-472d-9d17-2b69e894630f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 21:42:02 crc kubenswrapper[4858]: I0127 21:42:02.660452 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4bc44a79-c7d4-472d-9d17-2b69e894630f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 21:42:02 crc kubenswrapper[4858]: I0127 21:42:02.685831 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5jk4\" (UniqueName: \"kubernetes.io/projected/4bc44a79-c7d4-472d-9d17-2b69e894630f-kube-api-access-t5jk4\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4bc44a79-c7d4-472d-9d17-2b69e894630f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 21:42:02 crc kubenswrapper[4858]: I0127 21:42:02.690333 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4bc44a79-c7d4-472d-9d17-2b69e894630f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 21:42:02 crc kubenswrapper[4858]: I0127 21:42:02.778619 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 21:42:03 crc kubenswrapper[4858]: I0127 21:42:03.294475 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 21:42:03 crc kubenswrapper[4858]: I0127 21:42:03.899382 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"4bc44a79-c7d4-472d-9d17-2b69e894630f","Type":"ContainerStarted","Data":"29cc4308a7b7ddd752294e043544590811657183b2342adbcdbbea3b105dc448"} Jan 27 21:42:04 crc kubenswrapper[4858]: I0127 21:42:04.913011 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"4bc44a79-c7d4-472d-9d17-2b69e894630f","Type":"ContainerStarted","Data":"1ca4fd00259809c0e318edb31b46aea2ec6c6454aed71cb16c452ff7811c04f3"} Jan 27 21:42:04 crc kubenswrapper[4858]: I0127 21:42:04.932275 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.848078095 podStartE2EDuration="2.932258519s" podCreationTimestamp="2026-01-27 21:42:02 +0000 UTC" firstStartedPulling="2026-01-27 21:42:03.300062002 +0000 UTC m=+5668.007877728" lastFinishedPulling="2026-01-27 21:42:04.384242446 +0000 UTC m=+5669.092058152" observedRunningTime="2026-01-27 21:42:04.925308503 +0000 UTC m=+5669.633124209" watchObservedRunningTime="2026-01-27 21:42:04.932258519 +0000 UTC m=+5669.640074225" Jan 27 21:42:29 crc kubenswrapper[4858]: I0127 21:42:29.328590 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:42:29 crc kubenswrapper[4858]: I0127 21:42:29.329225 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:42:29 crc kubenswrapper[4858]: I0127 21:42:29.329297 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 21:42:29 crc kubenswrapper[4858]: I0127 21:42:29.330274 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 21:42:29 crc kubenswrapper[4858]: I0127 21:42:29.330356 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" 
podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" gracePeriod=600 Jan 27 21:42:29 crc kubenswrapper[4858]: E0127 21:42:29.458673 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:42:30 crc kubenswrapper[4858]: I0127 21:42:30.198226 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" exitCode=0 Jan 27 21:42:30 crc kubenswrapper[4858]: I0127 21:42:30.198514 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5"} Jan 27 21:42:30 crc kubenswrapper[4858]: I0127 21:42:30.198561 4858 scope.go:117] "RemoveContainer" containerID="698ab4411b28ea8110afc7f1b8c8ad80a5585b0249a399b46f6d6b5798b379ff" Jan 27 21:42:30 crc kubenswrapper[4858]: I0127 21:42:30.199226 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" Jan 27 21:42:30 crc kubenswrapper[4858]: E0127 21:42:30.199462 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:42:30 crc kubenswrapper[4858]: I0127 21:42:30.231147 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jkw9t/must-gather-l29cg"] Jan 27 21:42:30 crc kubenswrapper[4858]: I0127 21:42:30.232841 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jkw9t/must-gather-l29cg" Jan 27 21:42:30 crc kubenswrapper[4858]: I0127 21:42:30.236846 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jkw9t"/"default-dockercfg-cg22v" Jan 27 21:42:30 crc kubenswrapper[4858]: I0127 21:42:30.236923 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jkw9t"/"kube-root-ca.crt" Jan 27 21:42:30 crc kubenswrapper[4858]: I0127 21:42:30.237048 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jkw9t"/"openshift-service-ca.crt" Jan 27 21:42:30 crc kubenswrapper[4858]: I0127 21:42:30.301863 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jkw9t/must-gather-l29cg"] Jan 27 21:42:30 crc kubenswrapper[4858]: I0127 21:42:30.332867 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwzqs\" (UniqueName: \"kubernetes.io/projected/c5d56ed1-a59f-43b9-a3fa-0148df4f8909-kube-api-access-vwzqs\") pod \"must-gather-l29cg\" (UID: \"c5d56ed1-a59f-43b9-a3fa-0148df4f8909\") " pod="openshift-must-gather-jkw9t/must-gather-l29cg" Jan 27 21:42:30 crc kubenswrapper[4858]: I0127 21:42:30.332925 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c5d56ed1-a59f-43b9-a3fa-0148df4f8909-must-gather-output\") pod \"must-gather-l29cg\" (UID: \"c5d56ed1-a59f-43b9-a3fa-0148df4f8909\") " pod="openshift-must-gather-jkw9t/must-gather-l29cg" Jan 27 21:42:30 crc kubenswrapper[4858]: I0127 21:42:30.435399 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwzqs\" (UniqueName: \"kubernetes.io/projected/c5d56ed1-a59f-43b9-a3fa-0148df4f8909-kube-api-access-vwzqs\") pod \"must-gather-l29cg\" (UID: \"c5d56ed1-a59f-43b9-a3fa-0148df4f8909\") " pod="openshift-must-gather-jkw9t/must-gather-l29cg" Jan 27 21:42:30 crc kubenswrapper[4858]: I0127 21:42:30.435456 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c5d56ed1-a59f-43b9-a3fa-0148df4f8909-must-gather-output\") pod \"must-gather-l29cg\" (UID: \"c5d56ed1-a59f-43b9-a3fa-0148df4f8909\") " pod="openshift-must-gather-jkw9t/must-gather-l29cg" Jan 27 21:42:30 crc kubenswrapper[4858]: I0127 21:42:30.435894 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c5d56ed1-a59f-43b9-a3fa-0148df4f8909-must-gather-output\") pod \"must-gather-l29cg\" (UID: \"c5d56ed1-a59f-43b9-a3fa-0148df4f8909\") " pod="openshift-must-gather-jkw9t/must-gather-l29cg" Jan 27 21:42:30 crc kubenswrapper[4858]: I0127 21:42:30.478213 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwzqs\" (UniqueName: \"kubernetes.io/projected/c5d56ed1-a59f-43b9-a3fa-0148df4f8909-kube-api-access-vwzqs\") pod \"must-gather-l29cg\" (UID: \"c5d56ed1-a59f-43b9-a3fa-0148df4f8909\") " pod="openshift-must-gather-jkw9t/must-gather-l29cg" Jan 27 21:42:30 crc kubenswrapper[4858]: I0127 21:42:30.561918 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jkw9t/must-gather-l29cg" Jan 27 21:42:31 crc kubenswrapper[4858]: I0127 21:42:31.041299 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jkw9t/must-gather-l29cg"] Jan 27 21:42:31 crc kubenswrapper[4858]: W0127 21:42:31.047623 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5d56ed1_a59f_43b9_a3fa_0148df4f8909.slice/crio-64d94a5bf6a370ffa22c672c98cbb18b5464c20e81128a7fa035a57b7e920229 WatchSource:0}: Error finding container 64d94a5bf6a370ffa22c672c98cbb18b5464c20e81128a7fa035a57b7e920229: Status 404 returned error can't find the container with id 64d94a5bf6a370ffa22c672c98cbb18b5464c20e81128a7fa035a57b7e920229 Jan 27 21:42:31 crc kubenswrapper[4858]: I0127 21:42:31.213322 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jkw9t/must-gather-l29cg" event={"ID":"c5d56ed1-a59f-43b9-a3fa-0148df4f8909","Type":"ContainerStarted","Data":"64d94a5bf6a370ffa22c672c98cbb18b5464c20e81128a7fa035a57b7e920229"} Jan 27 21:42:38 crc kubenswrapper[4858]: I0127 21:42:38.296919 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jkw9t/must-gather-l29cg" event={"ID":"c5d56ed1-a59f-43b9-a3fa-0148df4f8909","Type":"ContainerStarted","Data":"c6ed15db398050e0fe2525192ad08faa33ffe9b666c4b779ac6ecdff28ffb9cc"} Jan 27 21:42:38 crc kubenswrapper[4858]: I0127 21:42:38.297389 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jkw9t/must-gather-l29cg" event={"ID":"c5d56ed1-a59f-43b9-a3fa-0148df4f8909","Type":"ContainerStarted","Data":"aa9e6c8c266908d2fbf174e33046f52791a64f8d8a14cddc5c2bff110e794d76"} Jan 27 21:42:38 crc kubenswrapper[4858]: I0127 21:42:38.325520 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jkw9t/must-gather-l29cg" podStartSLOduration=1.859367276 podStartE2EDuration="8.325504781s" podCreationTimestamp="2026-01-27 21:42:30 +0000 UTC" firstStartedPulling="2026-01-27 21:42:31.05545723 +0000 UTC m=+5695.763272976" lastFinishedPulling="2026-01-27 21:42:37.521594775 +0000 UTC m=+5702.229410481" observedRunningTime="2026-01-27 21:42:38.319648827 +0000 UTC m=+5703.027464533" watchObservedRunningTime="2026-01-27 21:42:38.325504781 +0000 UTC m=+5703.033320487" Jan 27 21:42:42 crc kubenswrapper[4858]: I0127 21:42:42.084879 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jkw9t/crc-debug-9gmjt"] Jan 27 21:42:42 crc kubenswrapper[4858]: I0127 21:42:42.091764 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jkw9t/crc-debug-9gmjt" Jan 27 21:42:42 crc kubenswrapper[4858]: I0127 21:42:42.190630 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb075fdd-e3e6-44b6-a09b-ea2000100848-host\") pod \"crc-debug-9gmjt\" (UID: \"cb075fdd-e3e6-44b6-a09b-ea2000100848\") " pod="openshift-must-gather-jkw9t/crc-debug-9gmjt" Jan 27 21:42:42 crc kubenswrapper[4858]: I0127 21:42:42.190792 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6nvj\" (UniqueName: \"kubernetes.io/projected/cb075fdd-e3e6-44b6-a09b-ea2000100848-kube-api-access-c6nvj\") pod \"crc-debug-9gmjt\" (UID: \"cb075fdd-e3e6-44b6-a09b-ea2000100848\") " pod="openshift-must-gather-jkw9t/crc-debug-9gmjt" Jan 27 21:42:42 crc kubenswrapper[4858]: I0127 21:42:42.293125 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb075fdd-e3e6-44b6-a09b-ea2000100848-host\") pod \"crc-debug-9gmjt\" (UID: \"cb075fdd-e3e6-44b6-a09b-ea2000100848\") " pod="openshift-must-gather-jkw9t/crc-debug-9gmjt" Jan 27 21:42:42 crc kubenswrapper[4858]: I0127 21:42:42.295407 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6nvj\" (UniqueName: \"kubernetes.io/projected/cb075fdd-e3e6-44b6-a09b-ea2000100848-kube-api-access-c6nvj\") pod \"crc-debug-9gmjt\" (UID: \"cb075fdd-e3e6-44b6-a09b-ea2000100848\") " pod="openshift-must-gather-jkw9t/crc-debug-9gmjt" Jan 27 21:42:42 crc kubenswrapper[4858]: I0127 21:42:42.293950 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb075fdd-e3e6-44b6-a09b-ea2000100848-host\") pod \"crc-debug-9gmjt\" (UID: \"cb075fdd-e3e6-44b6-a09b-ea2000100848\") " pod="openshift-must-gather-jkw9t/crc-debug-9gmjt" Jan 27 21:42:42 crc kubenswrapper[4858]: I0127 21:42:42.326227 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6nvj\" (UniqueName: \"kubernetes.io/projected/cb075fdd-e3e6-44b6-a09b-ea2000100848-kube-api-access-c6nvj\") pod \"crc-debug-9gmjt\" (UID: \"cb075fdd-e3e6-44b6-a09b-ea2000100848\") " pod="openshift-must-gather-jkw9t/crc-debug-9gmjt" Jan 27 21:42:42 crc kubenswrapper[4858]: I0127 21:42:42.420520 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jkw9t/crc-debug-9gmjt" Jan 27 21:42:42 crc kubenswrapper[4858]: W0127 21:42:42.459634 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb075fdd_e3e6_44b6_a09b_ea2000100848.slice/crio-c12c210d4a58acaef6acd4880a73830cc0ccf7e15509b10ae41b9b1cd6e20b28 WatchSource:0}: Error finding container c12c210d4a58acaef6acd4880a73830cc0ccf7e15509b10ae41b9b1cd6e20b28: Status 404 returned error can't find the container with id c12c210d4a58acaef6acd4880a73830cc0ccf7e15509b10ae41b9b1cd6e20b28 Jan 27 21:42:43 crc kubenswrapper[4858]: I0127 21:42:43.367367 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jkw9t/crc-debug-9gmjt" event={"ID":"cb075fdd-e3e6-44b6-a09b-ea2000100848","Type":"ContainerStarted","Data":"c12c210d4a58acaef6acd4880a73830cc0ccf7e15509b10ae41b9b1cd6e20b28"} Jan 27 21:42:45 crc kubenswrapper[4858]: I0127 21:42:45.071682 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" Jan 27 21:42:45 crc kubenswrapper[4858]: E0127 21:42:45.072464 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:42:54 crc kubenswrapper[4858]: I0127 21:42:54.487962 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jkw9t/crc-debug-9gmjt" event={"ID":"cb075fdd-e3e6-44b6-a09b-ea2000100848","Type":"ContainerStarted","Data":"9ac83be933e902d43b6b9dbb1e5baad37ed239af1ed08d81f8bb9e107e35bd92"} Jan 27 21:42:54 crc kubenswrapper[4858]: I0127 21:42:54.503999 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jkw9t/crc-debug-9gmjt" podStartSLOduration=1.177020894 podStartE2EDuration="12.503983514s" podCreationTimestamp="2026-01-27 21:42:42 +0000 UTC" firstStartedPulling="2026-01-27 21:42:42.46245956 +0000 UTC m=+5707.170275266" lastFinishedPulling="2026-01-27 21:42:53.78942218 +0000 UTC m=+5718.497237886" observedRunningTime="2026-01-27 21:42:54.50384092 +0000 UTC m=+5719.211656636" watchObservedRunningTime="2026-01-27 21:42:54.503983514 +0000 UTC m=+5719.211799220" Jan 27 21:43:00 crc kubenswrapper[4858]: I0127 21:43:00.071267 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" Jan 27 21:43:00 crc kubenswrapper[4858]: E0127 21:43:00.072133 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:43:13 crc kubenswrapper[4858]: I0127 21:43:13.071057 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" Jan 27 21:43:13 crc kubenswrapper[4858]: E0127 21:43:13.071985 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:43:25 crc kubenswrapper[4858]: I0127 21:43:25.071504 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" Jan 27 21:43:25 crc kubenswrapper[4858]: E0127 21:43:25.072420 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:43:40 crc kubenswrapper[4858]: I0127 21:43:40.071714 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" Jan 27 21:43:40 crc kubenswrapper[4858]: E0127 21:43:40.074073 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:43:47 crc kubenswrapper[4858]: I0127 21:43:47.987853 4858 generic.go:334] "Generic (PLEG): container finished" podID="cb075fdd-e3e6-44b6-a09b-ea2000100848" containerID="9ac83be933e902d43b6b9dbb1e5baad37ed239af1ed08d81f8bb9e107e35bd92" exitCode=0 Jan 27 21:43:47 crc kubenswrapper[4858]: I0127 21:43:47.987888 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jkw9t/crc-debug-9gmjt" event={"ID":"cb075fdd-e3e6-44b6-a09b-ea2000100848","Type":"ContainerDied","Data":"9ac83be933e902d43b6b9dbb1e5baad37ed239af1ed08d81f8bb9e107e35bd92"} Jan 27 21:43:49 crc kubenswrapper[4858]: I0127 21:43:49.160539 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jkw9t/crc-debug-9gmjt" Jan 27 21:43:49 crc kubenswrapper[4858]: I0127 21:43:49.201722 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jkw9t/crc-debug-9gmjt"] Jan 27 21:43:49 crc kubenswrapper[4858]: I0127 21:43:49.212157 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jkw9t/crc-debug-9gmjt"] Jan 27 21:43:49 crc kubenswrapper[4858]: I0127 21:43:49.263290 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb075fdd-e3e6-44b6-a09b-ea2000100848-host\") pod \"cb075fdd-e3e6-44b6-a09b-ea2000100848\" (UID: \"cb075fdd-e3e6-44b6-a09b-ea2000100848\") " Jan 27 21:43:49 crc kubenswrapper[4858]: I0127 21:43:49.263350 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb075fdd-e3e6-44b6-a09b-ea2000100848-host" (OuterVolumeSpecName: "host") pod "cb075fdd-e3e6-44b6-a09b-ea2000100848" (UID: "cb075fdd-e3e6-44b6-a09b-ea2000100848"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:43:49 crc kubenswrapper[4858]: I0127 21:43:49.263438 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6nvj\" (UniqueName: \"kubernetes.io/projected/cb075fdd-e3e6-44b6-a09b-ea2000100848-kube-api-access-c6nvj\") pod \"cb075fdd-e3e6-44b6-a09b-ea2000100848\" (UID: \"cb075fdd-e3e6-44b6-a09b-ea2000100848\") " Jan 27 21:43:49 crc kubenswrapper[4858]: I0127 21:43:49.263938 4858 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb075fdd-e3e6-44b6-a09b-ea2000100848-host\") on node \"crc\" DevicePath \"\"" Jan 27 21:43:49 crc kubenswrapper[4858]: I0127 21:43:49.270753 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb075fdd-e3e6-44b6-a09b-ea2000100848-kube-api-access-c6nvj" (OuterVolumeSpecName: "kube-api-access-c6nvj") pod "cb075fdd-e3e6-44b6-a09b-ea2000100848" (UID: "cb075fdd-e3e6-44b6-a09b-ea2000100848"). InnerVolumeSpecName "kube-api-access-c6nvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:43:49 crc kubenswrapper[4858]: I0127 21:43:49.365888 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6nvj\" (UniqueName: \"kubernetes.io/projected/cb075fdd-e3e6-44b6-a09b-ea2000100848-kube-api-access-c6nvj\") on node \"crc\" DevicePath \"\"" Jan 27 21:43:50 crc kubenswrapper[4858]: I0127 21:43:50.007408 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c12c210d4a58acaef6acd4880a73830cc0ccf7e15509b10ae41b9b1cd6e20b28" Jan 27 21:43:50 crc kubenswrapper[4858]: I0127 21:43:50.007451 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jkw9t/crc-debug-9gmjt" Jan 27 21:43:50 crc kubenswrapper[4858]: I0127 21:43:50.095094 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb075fdd-e3e6-44b6-a09b-ea2000100848" path="/var/lib/kubelet/pods/cb075fdd-e3e6-44b6-a09b-ea2000100848/volumes" Jan 27 21:43:50 crc kubenswrapper[4858]: I0127 21:43:50.390256 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jkw9t/crc-debug-h5tg9"] Jan 27 21:43:50 crc kubenswrapper[4858]: E0127 21:43:50.391844 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb075fdd-e3e6-44b6-a09b-ea2000100848" containerName="container-00" Jan 27 21:43:50 crc kubenswrapper[4858]: I0127 21:43:50.391932 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb075fdd-e3e6-44b6-a09b-ea2000100848" containerName="container-00" Jan 27 21:43:50 crc kubenswrapper[4858]: I0127 21:43:50.392200 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb075fdd-e3e6-44b6-a09b-ea2000100848" containerName="container-00" Jan 27 21:43:50 crc kubenswrapper[4858]: I0127 21:43:50.393215 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jkw9t/crc-debug-h5tg9" Jan 27 21:43:50 crc kubenswrapper[4858]: I0127 21:43:50.486868 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57ed1d5b-322e-46b9-8f8e-2810a9a55afb-host\") pod \"crc-debug-h5tg9\" (UID: \"57ed1d5b-322e-46b9-8f8e-2810a9a55afb\") " pod="openshift-must-gather-jkw9t/crc-debug-h5tg9" Jan 27 21:43:50 crc kubenswrapper[4858]: I0127 21:43:50.487185 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hwzn\" (UniqueName: \"kubernetes.io/projected/57ed1d5b-322e-46b9-8f8e-2810a9a55afb-kube-api-access-4hwzn\") pod \"crc-debug-h5tg9\" (UID: \"57ed1d5b-322e-46b9-8f8e-2810a9a55afb\") " pod="openshift-must-gather-jkw9t/crc-debug-h5tg9" Jan 27 21:43:50 crc kubenswrapper[4858]: I0127 21:43:50.589294 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57ed1d5b-322e-46b9-8f8e-2810a9a55afb-host\") pod \"crc-debug-h5tg9\" (UID: \"57ed1d5b-322e-46b9-8f8e-2810a9a55afb\") " pod="openshift-must-gather-jkw9t/crc-debug-h5tg9" Jan 27 21:43:50 crc kubenswrapper[4858]: I0127 21:43:50.589440 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hwzn\" (UniqueName: \"kubernetes.io/projected/57ed1d5b-322e-46b9-8f8e-2810a9a55afb-kube-api-access-4hwzn\") pod \"crc-debug-h5tg9\" (UID: \"57ed1d5b-322e-46b9-8f8e-2810a9a55afb\") " pod="openshift-must-gather-jkw9t/crc-debug-h5tg9" Jan 27 21:43:50 crc kubenswrapper[4858]: I0127 21:43:50.589458 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57ed1d5b-322e-46b9-8f8e-2810a9a55afb-host\") pod \"crc-debug-h5tg9\" (UID: \"57ed1d5b-322e-46b9-8f8e-2810a9a55afb\") " pod="openshift-must-gather-jkw9t/crc-debug-h5tg9" Jan 27 21:43:50 crc kubenswrapper[4858]: I0127 21:43:50.613442 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hwzn\" (UniqueName: \"kubernetes.io/projected/57ed1d5b-322e-46b9-8f8e-2810a9a55afb-kube-api-access-4hwzn\") pod \"crc-debug-h5tg9\" (UID: \"57ed1d5b-322e-46b9-8f8e-2810a9a55afb\") " pod="openshift-must-gather-jkw9t/crc-debug-h5tg9" Jan 27 21:43:50 crc kubenswrapper[4858]: I0127 21:43:50.711039 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jkw9t/crc-debug-h5tg9" Jan 27 21:43:51 crc kubenswrapper[4858]: I0127 21:43:51.016953 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jkw9t/crc-debug-h5tg9" event={"ID":"57ed1d5b-322e-46b9-8f8e-2810a9a55afb","Type":"ContainerStarted","Data":"4420a1f1419bbe490bc365667eb59d3e64e4d1dfb583748b1462caa36155d6d6"} Jan 27 21:43:51 crc kubenswrapper[4858]: I0127 21:43:51.017293 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jkw9t/crc-debug-h5tg9" event={"ID":"57ed1d5b-322e-46b9-8f8e-2810a9a55afb","Type":"ContainerStarted","Data":"1507ffa102bf8eba774db31e90e4045c79f0b026fc403e37e56332840bdff278"} Jan 27 21:43:51 crc kubenswrapper[4858]: I0127 21:43:51.031972 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jkw9t/crc-debug-h5tg9" podStartSLOduration=1.031947695 podStartE2EDuration="1.031947695s" podCreationTimestamp="2026-01-27 21:43:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:43:51.029043763 +0000 UTC m=+5775.736859499" watchObservedRunningTime="2026-01-27 21:43:51.031947695 +0000 UTC m=+5775.739763411" Jan 27 21:43:52 crc kubenswrapper[4858]: I0127 21:43:52.039495 4858 generic.go:334] "Generic (PLEG): container finished" podID="57ed1d5b-322e-46b9-8f8e-2810a9a55afb" containerID="4420a1f1419bbe490bc365667eb59d3e64e4d1dfb583748b1462caa36155d6d6" exitCode=0 Jan 27 21:43:52 crc kubenswrapper[4858]: I0127 21:43:52.039881 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jkw9t/crc-debug-h5tg9" event={"ID":"57ed1d5b-322e-46b9-8f8e-2810a9a55afb","Type":"ContainerDied","Data":"4420a1f1419bbe490bc365667eb59d3e64e4d1dfb583748b1462caa36155d6d6"} Jan 27 21:43:53 crc kubenswrapper[4858]: I0127 21:43:53.070674 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" Jan 27 21:43:53 crc kubenswrapper[4858]: E0127 21:43:53.071048 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:43:53 crc kubenswrapper[4858]: I0127 21:43:53.165834 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jkw9t/crc-debug-h5tg9" Jan 27 21:43:53 crc kubenswrapper[4858]: I0127 21:43:53.230816 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57ed1d5b-322e-46b9-8f8e-2810a9a55afb-host\") pod \"57ed1d5b-322e-46b9-8f8e-2810a9a55afb\" (UID: \"57ed1d5b-322e-46b9-8f8e-2810a9a55afb\") " Jan 27 21:43:53 crc kubenswrapper[4858]: I0127 21:43:53.231323 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hwzn\" (UniqueName: \"kubernetes.io/projected/57ed1d5b-322e-46b9-8f8e-2810a9a55afb-kube-api-access-4hwzn\") pod \"57ed1d5b-322e-46b9-8f8e-2810a9a55afb\" (UID: \"57ed1d5b-322e-46b9-8f8e-2810a9a55afb\") " Jan 27 21:43:53 crc kubenswrapper[4858]: I0127 21:43:53.232816 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57ed1d5b-322e-46b9-8f8e-2810a9a55afb-host" (OuterVolumeSpecName: "host") pod "57ed1d5b-322e-46b9-8f8e-2810a9a55afb" (UID: "57ed1d5b-322e-46b9-8f8e-2810a9a55afb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:43:53 crc kubenswrapper[4858]: I0127 21:43:53.237290 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57ed1d5b-322e-46b9-8f8e-2810a9a55afb-kube-api-access-4hwzn" (OuterVolumeSpecName: "kube-api-access-4hwzn") pod "57ed1d5b-322e-46b9-8f8e-2810a9a55afb" (UID: "57ed1d5b-322e-46b9-8f8e-2810a9a55afb"). InnerVolumeSpecName "kube-api-access-4hwzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:43:53 crc kubenswrapper[4858]: I0127 21:43:53.333981 4858 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/57ed1d5b-322e-46b9-8f8e-2810a9a55afb-host\") on node \"crc\" DevicePath \"\"" Jan 27 21:43:53 crc kubenswrapper[4858]: I0127 21:43:53.334037 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hwzn\" (UniqueName: \"kubernetes.io/projected/57ed1d5b-322e-46b9-8f8e-2810a9a55afb-kube-api-access-4hwzn\") on node \"crc\" DevicePath \"\"" Jan 27 21:43:53 crc kubenswrapper[4858]: I0127 21:43:53.543926 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jkw9t/crc-debug-h5tg9"] Jan 27 21:43:53 crc kubenswrapper[4858]: I0127 21:43:53.552203 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jkw9t/crc-debug-h5tg9"] Jan 27 21:43:54 crc kubenswrapper[4858]: I0127 21:43:54.061058 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1507ffa102bf8eba774db31e90e4045c79f0b026fc403e37e56332840bdff278" Jan 27 21:43:54 crc kubenswrapper[4858]: I0127 21:43:54.061378 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jkw9t/crc-debug-h5tg9" Jan 27 21:43:54 crc kubenswrapper[4858]: I0127 21:43:54.101814 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57ed1d5b-322e-46b9-8f8e-2810a9a55afb" path="/var/lib/kubelet/pods/57ed1d5b-322e-46b9-8f8e-2810a9a55afb/volumes" Jan 27 21:43:54 crc kubenswrapper[4858]: I0127 21:43:54.721716 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jkw9t/crc-debug-tm82k"] Jan 27 21:43:54 crc kubenswrapper[4858]: E0127 21:43:54.723021 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57ed1d5b-322e-46b9-8f8e-2810a9a55afb" containerName="container-00" Jan 27 21:43:54 crc kubenswrapper[4858]: I0127 21:43:54.723177 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ed1d5b-322e-46b9-8f8e-2810a9a55afb" containerName="container-00" Jan 27 21:43:54 crc kubenswrapper[4858]: I0127 21:43:54.723690 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="57ed1d5b-322e-46b9-8f8e-2810a9a55afb" containerName="container-00" Jan 27 21:43:54 crc kubenswrapper[4858]: I0127 21:43:54.724902 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jkw9t/crc-debug-tm82k" Jan 27 21:43:54 crc kubenswrapper[4858]: I0127 21:43:54.765040 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnwvt\" (UniqueName: \"kubernetes.io/projected/e61ee1e1-40e6-4868-b6e3-2f3547a22151-kube-api-access-qnwvt\") pod \"crc-debug-tm82k\" (UID: \"e61ee1e1-40e6-4868-b6e3-2f3547a22151\") " pod="openshift-must-gather-jkw9t/crc-debug-tm82k" Jan 27 21:43:54 crc kubenswrapper[4858]: I0127 21:43:54.765433 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e61ee1e1-40e6-4868-b6e3-2f3547a22151-host\") pod \"crc-debug-tm82k\" (UID: \"e61ee1e1-40e6-4868-b6e3-2f3547a22151\") " pod="openshift-must-gather-jkw9t/crc-debug-tm82k" Jan 27 21:43:54 crc kubenswrapper[4858]: I0127 21:43:54.867142 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnwvt\" (UniqueName: \"kubernetes.io/projected/e61ee1e1-40e6-4868-b6e3-2f3547a22151-kube-api-access-qnwvt\") pod \"crc-debug-tm82k\" (UID: \"e61ee1e1-40e6-4868-b6e3-2f3547a22151\") " pod="openshift-must-gather-jkw9t/crc-debug-tm82k" Jan 27 21:43:54 crc kubenswrapper[4858]: I0127 21:43:54.867247 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e61ee1e1-40e6-4868-b6e3-2f3547a22151-host\") pod \"crc-debug-tm82k\" (UID: \"e61ee1e1-40e6-4868-b6e3-2f3547a22151\") " pod="openshift-must-gather-jkw9t/crc-debug-tm82k" Jan 27 21:43:54 crc kubenswrapper[4858]: I0127 21:43:54.867410 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e61ee1e1-40e6-4868-b6e3-2f3547a22151-host\") pod \"crc-debug-tm82k\" (UID: \"e61ee1e1-40e6-4868-b6e3-2f3547a22151\") " pod="openshift-must-gather-jkw9t/crc-debug-tm82k" Jan 27 21:43:54 crc kubenswrapper[4858]: I0127 21:43:54.891638 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnwvt\" (UniqueName: \"kubernetes.io/projected/e61ee1e1-40e6-4868-b6e3-2f3547a22151-kube-api-access-qnwvt\") pod \"crc-debug-tm82k\" (UID: \"e61ee1e1-40e6-4868-b6e3-2f3547a22151\") " 
pod="openshift-must-gather-jkw9t/crc-debug-tm82k" Jan 27 21:43:55 crc kubenswrapper[4858]: I0127 21:43:55.046064 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jkw9t/crc-debug-tm82k" Jan 27 21:43:55 crc kubenswrapper[4858]: W0127 21:43:55.088593 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode61ee1e1_40e6_4868_b6e3_2f3547a22151.slice/crio-5343b170670391b9653440030ed156cdd5f6998e5344565b9c79d0311bb48396 WatchSource:0}: Error finding container 5343b170670391b9653440030ed156cdd5f6998e5344565b9c79d0311bb48396: Status 404 returned error can't find the container with id 5343b170670391b9653440030ed156cdd5f6998e5344565b9c79d0311bb48396 Jan 27 21:43:56 crc kubenswrapper[4858]: I0127 21:43:56.108850 4858 generic.go:334] "Generic (PLEG): container finished" podID="e61ee1e1-40e6-4868-b6e3-2f3547a22151" containerID="2491aa73b80df450f6d85f3f4a2ec3dcfb48404dd54bd9b4757c3fbc9464776a" exitCode=0 Jan 27 21:43:56 crc kubenswrapper[4858]: I0127 21:43:56.109112 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jkw9t/crc-debug-tm82k" event={"ID":"e61ee1e1-40e6-4868-b6e3-2f3547a22151","Type":"ContainerDied","Data":"2491aa73b80df450f6d85f3f4a2ec3dcfb48404dd54bd9b4757c3fbc9464776a"} Jan 27 21:43:56 crc kubenswrapper[4858]: I0127 21:43:56.109142 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jkw9t/crc-debug-tm82k" event={"ID":"e61ee1e1-40e6-4868-b6e3-2f3547a22151","Type":"ContainerStarted","Data":"5343b170670391b9653440030ed156cdd5f6998e5344565b9c79d0311bb48396"} Jan 27 21:43:56 crc kubenswrapper[4858]: I0127 21:43:56.151787 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jkw9t/crc-debug-tm82k"] Jan 27 21:43:56 crc kubenswrapper[4858]: I0127 21:43:56.159667 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jkw9t/crc-debug-tm82k"] Jan 27 21:43:57 crc kubenswrapper[4858]: I0127 21:43:57.239483 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jkw9t/crc-debug-tm82k" Jan 27 21:43:57 crc kubenswrapper[4858]: I0127 21:43:57.335771 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e61ee1e1-40e6-4868-b6e3-2f3547a22151-host\") pod \"e61ee1e1-40e6-4868-b6e3-2f3547a22151\" (UID: \"e61ee1e1-40e6-4868-b6e3-2f3547a22151\") " Jan 27 21:43:57 crc kubenswrapper[4858]: I0127 21:43:57.335935 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e61ee1e1-40e6-4868-b6e3-2f3547a22151-host" (OuterVolumeSpecName: "host") pod "e61ee1e1-40e6-4868-b6e3-2f3547a22151" (UID: "e61ee1e1-40e6-4868-b6e3-2f3547a22151"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:43:57 crc kubenswrapper[4858]: I0127 21:43:57.336061 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnwvt\" (UniqueName: \"kubernetes.io/projected/e61ee1e1-40e6-4868-b6e3-2f3547a22151-kube-api-access-qnwvt\") pod \"e61ee1e1-40e6-4868-b6e3-2f3547a22151\" (UID: \"e61ee1e1-40e6-4868-b6e3-2f3547a22151\") " Jan 27 21:43:57 crc kubenswrapper[4858]: I0127 21:43:57.336872 4858 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e61ee1e1-40e6-4868-b6e3-2f3547a22151-host\") on node \"crc\" DevicePath \"\"" Jan 27 21:43:57 crc kubenswrapper[4858]: I0127 21:43:57.341909 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e61ee1e1-40e6-4868-b6e3-2f3547a22151-kube-api-access-qnwvt" (OuterVolumeSpecName: "kube-api-access-qnwvt") pod "e61ee1e1-40e6-4868-b6e3-2f3547a22151" (UID: "e61ee1e1-40e6-4868-b6e3-2f3547a22151"). InnerVolumeSpecName "kube-api-access-qnwvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:43:57 crc kubenswrapper[4858]: I0127 21:43:57.438883 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnwvt\" (UniqueName: \"kubernetes.io/projected/e61ee1e1-40e6-4868-b6e3-2f3547a22151-kube-api-access-qnwvt\") on node \"crc\" DevicePath \"\"" Jan 27 21:43:58 crc kubenswrapper[4858]: I0127 21:43:58.083687 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e61ee1e1-40e6-4868-b6e3-2f3547a22151" path="/var/lib/kubelet/pods/e61ee1e1-40e6-4868-b6e3-2f3547a22151/volumes" Jan 27 21:43:58 crc kubenswrapper[4858]: I0127 21:43:58.130943 4858 scope.go:117] "RemoveContainer" containerID="2491aa73b80df450f6d85f3f4a2ec3dcfb48404dd54bd9b4757c3fbc9464776a" Jan 27 21:43:58 crc kubenswrapper[4858]: I0127 21:43:58.131005 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jkw9t/crc-debug-tm82k" Jan 27 21:44:04 crc kubenswrapper[4858]: I0127 21:44:04.071537 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" Jan 27 21:44:04 crc kubenswrapper[4858]: E0127 21:44:04.072368 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:44:16 crc kubenswrapper[4858]: I0127 21:44:16.094457 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" Jan 27 21:44:16 crc kubenswrapper[4858]: E0127 21:44:16.096288 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:44:23 crc kubenswrapper[4858]: I0127 21:44:23.871803 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-69dcf58cf6-v246z_9651b951-4ad0-42ae-85fb-176da5b8ccdf/barbican-api/0.log" Jan 27 21:44:23 crc kubenswrapper[4858]: I0127 21:44:23.948968 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-69dcf58cf6-v246z_9651b951-4ad0-42ae-85fb-176da5b8ccdf/barbican-api-log/0.log" Jan 27 21:44:24 crc kubenswrapper[4858]: I0127 21:44:24.057050 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-58f4598744-qn5jn_9927d309-f818-4163-9659-f7b6a060960e/barbican-keystone-listener/0.log" Jan 27 21:44:24 crc kubenswrapper[4858]: I0127 21:44:24.212035 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-58f4598744-qn5jn_9927d309-f818-4163-9659-f7b6a060960e/barbican-keystone-listener-log/0.log" Jan 27 21:44:24 crc kubenswrapper[4858]: I0127 21:44:24.318134 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5598668497-6nzrb_9bb04320-907b-4d35-9c41-ea828a779f5d/barbican-worker/0.log" Jan 27 21:44:24 crc kubenswrapper[4858]: I0127 21:44:24.423316 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5598668497-6nzrb_9bb04320-907b-4d35-9c41-ea828a779f5d/barbican-worker-log/0.log" Jan 27 21:44:24 crc kubenswrapper[4858]: I0127 21:44:24.522455 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj_20ffe28d-a9df-4416-85b7-c501d7555431/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:44:24 crc kubenswrapper[4858]: I0127 21:44:24.736126 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4e9e36b1-d81b-4be3-a0d7-ee413bdece24/ceilometer-notification-agent/0.log" Jan 27 21:44:24 crc kubenswrapper[4858]: I0127 21:44:24.750139 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4e9e36b1-d81b-4be3-a0d7-ee413bdece24/proxy-httpd/0.log" 
Jan 27 21:44:24 crc kubenswrapper[4858]: I0127 21:44:24.833842 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4e9e36b1-d81b-4be3-a0d7-ee413bdece24/ceilometer-central-agent/0.log"
Jan 27 21:44:24 crc kubenswrapper[4858]: I0127 21:44:24.873567 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4e9e36b1-d81b-4be3-a0d7-ee413bdece24/sg-core/0.log"
Jan 27 21:44:25 crc kubenswrapper[4858]: I0127 21:44:25.075830 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_29bdfd71-369f-46e8-be09-4e5b5bb22d1a/cinder-api-log/0.log"
Jan 27 21:44:25 crc kubenswrapper[4858]: I0127 21:44:25.374686 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_2106fe3e-bec7-4072-ba21-4f55b4a1b37a/probe/0.log"
Jan 27 21:44:25 crc kubenswrapper[4858]: I0127 21:44:25.571818 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_29bdfd71-369f-46e8-be09-4e5b5bb22d1a/cinder-api/0.log"
Jan 27 21:44:25 crc kubenswrapper[4858]: I0127 21:44:25.608706 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d853bb36-2749-40a8-9533-4caa077b1812/cinder-scheduler/0.log"
Jan 27 21:44:25 crc kubenswrapper[4858]: I0127 21:44:25.628540 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_2106fe3e-bec7-4072-ba21-4f55b4a1b37a/cinder-backup/0.log"
Jan 27 21:44:25 crc kubenswrapper[4858]: I0127 21:44:25.730111 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d853bb36-2749-40a8-9533-4caa077b1812/probe/0.log"
Jan 27 21:44:25 crc kubenswrapper[4858]: I0127 21:44:25.852889 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_292ec3c5-71af-43c3-8bee-e815c3876637/probe/0.log"
Jan 27 21:44:25 crc kubenswrapper[4858]: I0127 21:44:25.991758 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_292ec3c5-71af-43c3-8bee-e815c3876637/cinder-volume/0.log"
Jan 27 21:44:26 crc kubenswrapper[4858]: I0127 21:44:26.162335 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d/probe/0.log"
Jan 27 21:44:26 crc kubenswrapper[4858]: I0127 21:44:26.247759 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d/cinder-volume/0.log"
Jan 27 21:44:26 crc kubenswrapper[4858]: I0127 21:44:26.339753 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hf82v_8f14a76f-9e03-4695-98e5-c1efe11ae337/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 27 21:44:26 crc kubenswrapper[4858]: I0127 21:44:26.488089 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-mbwng_70c1d5d9-384f-4155-b4bc-cdc9185090f0/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 27 21:44:26 crc kubenswrapper[4858]: I0127 21:44:26.776118 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-694549759-h5nzn_d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e/init/0.log"
Jan 27 21:44:27 crc kubenswrapper[4858]: I0127 21:44:27.202541 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-694549759-h5nzn_d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e/init/0.log"
Jan 27 21:44:27 crc kubenswrapper[4858]: I0127 21:44:27.334474 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt_d59ffd9a-001c-400a-b79b-4617489956ed/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 27 21:44:27 crc kubenswrapper[4858]: I0127 21:44:27.436326 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-694549759-h5nzn_d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e/dnsmasq-dns/0.log"
Jan 27 21:44:27 crc kubenswrapper[4858]: I0127 21:44:27.626847 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_be03aee4-8299-48e7-91cb-18bbad0b2a0b/glance-log/0.log"
Jan 27 21:44:27 crc kubenswrapper[4858]: I0127 21:44:27.661870 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_be03aee4-8299-48e7-91cb-18bbad0b2a0b/glance-httpd/0.log"
Jan 27 21:44:27 crc kubenswrapper[4858]: I0127 21:44:27.859314 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_719be79f-34ec-4e95-b1a7-e507c6214053/glance-httpd/0.log"
Jan 27 21:44:27 crc kubenswrapper[4858]: I0127 21:44:27.873849 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_719be79f-34ec-4e95-b1a7-e507c6214053/glance-log/0.log"
Jan 27 21:44:27 crc kubenswrapper[4858]: I0127 21:44:27.966275 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57556bc8bb-j4fhs_996129af-9ae9-44ca-b677-2c27bf71847d/horizon/0.log"
Jan 27 21:44:28 crc kubenswrapper[4858]: I0127 21:44:28.139778 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-d76mj_a05def4c-d0a6-4e87-8b26-8d72512941a2/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 27 21:44:28 crc kubenswrapper[4858]: I0127 21:44:28.353566 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lzndd_78e25299-cf17-451b-8f2f-d980ff184dac/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 27 21:44:28 crc kubenswrapper[4858]: I0127 21:44:28.642684 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29492461-vqlt7_5b18bb1a-5b75-4c25-b553-12b03b2492a0/keystone-cron/0.log"
Jan 27 21:44:28 crc kubenswrapper[4858]: I0127 21:44:28.866628 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_4d832896-a304-4a89-8ef2-607eea6623e5/kube-state-metrics/0.log"
Jan 27 21:44:28 crc kubenswrapper[4858]: I0127 21:44:28.873073 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57556bc8bb-j4fhs_996129af-9ae9-44ca-b677-2c27bf71847d/horizon-log/0.log"
Jan 27 21:44:29 crc kubenswrapper[4858]: I0127 21:44:29.050322 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5bf568dbc7-xlg4d_2c447296-df73-4efb-b85b-dc9d468d2d80/keystone-api/0.log"
Jan 27 21:44:29 crc kubenswrapper[4858]: I0127 21:44:29.080719 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d_3c49607c-dca5-4943-acbc-5c13058a99df/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx_d30a30e0-0d38-4abd-8fc2-71b1ddce069a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:44:29 crc kubenswrapper[4858]: I0127 21:44:29.692609 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5c77755cc5-ffvng_4991d38c-6548-43d3-b4b7-884b71af9f07/neutron-httpd/0.log" Jan 27 21:44:29 crc kubenswrapper[4858]: I0127 21:44:29.775965 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5c77755cc5-ffvng_4991d38c-6548-43d3-b4b7-884b71af9f07/neutron-api/0.log" Jan 27 21:44:30 crc kubenswrapper[4858]: I0127 21:44:30.070563 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" Jan 27 21:44:30 crc kubenswrapper[4858]: E0127 21:44:30.070849 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:44:30 crc kubenswrapper[4858]: I0127 21:44:30.320480 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0/nova-cell0-conductor-conductor/0.log" Jan 27 21:44:30 crc kubenswrapper[4858]: I0127 21:44:30.619080 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_20379f76-6255-45c6-ba02-55526177a0c0/nova-cell1-conductor-conductor/0.log" Jan 27 21:44:31 crc kubenswrapper[4858]: I0127 21:44:31.235692 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 21:44:31 crc kubenswrapper[4858]: I0127 21:44:31.360151 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b104f9b-a37f-44bc-875f-03ce0d396c57/nova-api-log/0.log" Jan 27 21:44:31 crc kubenswrapper[4858]: I0127 21:44:31.384234 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-87t6g_cee2f5ea-c848-418b-975f-ba255506d1ae/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:44:31 crc kubenswrapper[4858]: I0127 21:44:31.692088 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24/nova-metadata-log/0.log" Jan 27 21:44:31 crc kubenswrapper[4858]: I0127 21:44:31.705884 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b104f9b-a37f-44bc-875f-03ce0d396c57/nova-api-api/0.log" Jan 27 21:44:32 crc kubenswrapper[4858]: I0127 21:44:32.126977 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f7f223cd-763c-408e-a3cf-067af57416af/mysql-bootstrap/0.log" Jan 27 21:44:32 crc kubenswrapper[4858]: I0127 21:44:32.307674 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6f1620a5-c040-450a-a149-e7bf421b80d9/nova-scheduler-scheduler/0.log" Jan 27 21:44:32 crc kubenswrapper[4858]: I0127 21:44:32.308423 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_f7f223cd-763c-408e-a3cf-067af57416af/mysql-bootstrap/0.log" Jan 27 21:44:32 crc kubenswrapper[4858]: I0127 21:44:32.395858 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f7f223cd-763c-408e-a3cf-067af57416af/galera/0.log" Jan 27 21:44:32 crc kubenswrapper[4858]: I0127 21:44:32.572029 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4768f41e-8ff0-4cec-b741-75f8902eb0e8/mysql-bootstrap/0.log" Jan 27 21:44:32 crc kubenswrapper[4858]: I0127 21:44:32.845358 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4768f41e-8ff0-4cec-b741-75f8902eb0e8/galera/0.log" Jan 27 21:44:32 crc kubenswrapper[4858]: I0127 21:44:32.856963 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4768f41e-8ff0-4cec-b741-75f8902eb0e8/mysql-bootstrap/0.log" Jan 27 21:44:33 crc kubenswrapper[4858]: I0127 21:44:33.144754 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_892953fc-7620-4274-9f89-c86e2ec23782/openstackclient/0.log" Jan 27 21:44:33 crc kubenswrapper[4858]: I0127 21:44:33.152719 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jc5cc_d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa/ovn-controller/0.log" Jan 27 21:44:33 crc kubenswrapper[4858]: I0127 21:44:33.353475 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wzfwm_51e0f41c-e13e-41d1-bc48-71c6ef96994c/openstack-network-exporter/0.log" Jan 27 21:44:33 crc kubenswrapper[4858]: I0127 21:44:33.588939 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vhbc7_b28f0be1-aa4f-445d-95c3-1abd84b9c82a/ovsdb-server-init/0.log" Jan 27 21:44:33 crc kubenswrapper[4858]: I0127 21:44:33.790734 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24/nova-metadata-metadata/0.log" Jan 27 21:44:33 crc kubenswrapper[4858]: I0127 21:44:33.812517 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vhbc7_b28f0be1-aa4f-445d-95c3-1abd84b9c82a/ovsdb-server/0.log" Jan 27 21:44:33 crc kubenswrapper[4858]: I0127 21:44:33.884347 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vhbc7_b28f0be1-aa4f-445d-95c3-1abd84b9c82a/ovsdb-server-init/0.log" Jan 27 21:44:34 crc kubenswrapper[4858]: I0127 21:44:34.157285 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-xlcx9_1e29e00b-0b7a-4415-a1b1-abd8aec81f9e/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:44:34 crc kubenswrapper[4858]: I0127 21:44:34.233438 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vhbc7_b28f0be1-aa4f-445d-95c3-1abd84b9c82a/ovs-vswitchd/0.log" Jan 27 21:44:34 crc kubenswrapper[4858]: I0127 21:44:34.287507 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3c57afc3-0c88-46a8-ab70-332b1a43ee7f/openstack-network-exporter/0.log" Jan 27 21:44:34 crc kubenswrapper[4858]: I0127 21:44:34.452844 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3c57afc3-0c88-46a8-ab70-332b1a43ee7f/ovn-northd/0.log" Jan 27 21:44:34 crc kubenswrapper[4858]: I0127 21:44:34.514810 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_b049e044-9171-4011-9c90-c334fa955321/openstack-network-exporter/0.log" Jan 27 21:44:34 crc kubenswrapper[4858]: I0127 21:44:34.557588 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b049e044-9171-4011-9c90-c334fa955321/ovsdbserver-nb/0.log" Jan 27 21:44:34 crc kubenswrapper[4858]: I0127 21:44:34.735028 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_13a7d533-55e2-4072-add8-4cd41613da8a/openstack-network-exporter/0.log" Jan 27 21:44:34 crc kubenswrapper[4858]: I0127 21:44:34.827720 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_13a7d533-55e2-4072-add8-4cd41613da8a/ovsdbserver-sb/0.log" Jan 27 21:44:35 crc kubenswrapper[4858]: I0127 21:44:35.274168 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f9b655566-275d7_598238a6-e427-47db-b460-298627190cce/placement-api/0.log" Jan 27 21:44:35 crc kubenswrapper[4858]: I0127 21:44:35.388076 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0511fb5d-042b-4155-88f8-3711949342c5/init-config-reloader/0.log" Jan 27 21:44:35 crc kubenswrapper[4858]: I0127 21:44:35.401079 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f9b655566-275d7_598238a6-e427-47db-b460-298627190cce/placement-log/0.log" Jan 27 21:44:35 crc kubenswrapper[4858]: I0127 21:44:35.697368 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0511fb5d-042b-4155-88f8-3711949342c5/prometheus/0.log" Jan 27 21:44:35 crc kubenswrapper[4858]: I0127 21:44:35.697510 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0511fb5d-042b-4155-88f8-3711949342c5/thanos-sidecar/0.log" Jan 27 21:44:35 crc kubenswrapper[4858]: I0127 21:44:35.698209 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0511fb5d-042b-4155-88f8-3711949342c5/init-config-reloader/0.log" Jan 27 21:44:35 crc kubenswrapper[4858]: I0127 21:44:35.743887 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0511fb5d-042b-4155-88f8-3711949342c5/config-reloader/0.log" Jan 27 21:44:35 crc kubenswrapper[4858]: I0127 21:44:35.981421 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d8aaed51-c0b1-4242-8d7b-a4256539e2ea/setup-container/0.log" Jan 27 21:44:36 crc kubenswrapper[4858]: I0127 21:44:36.174720 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d8aaed51-c0b1-4242-8d7b-a4256539e2ea/rabbitmq/0.log" Jan 27 21:44:36 crc kubenswrapper[4858]: I0127 21:44:36.194193 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_6c539609-6c9e-46bc-a0d7-6a629e83ce17/setup-container/0.log" Jan 27 21:44:36 crc kubenswrapper[4858]: I0127 21:44:36.207717 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d8aaed51-c0b1-4242-8d7b-a4256539e2ea/setup-container/0.log" Jan 27 21:44:36 crc kubenswrapper[4858]: I0127 21:44:36.510605 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_6c539609-6c9e-46bc-a0d7-6a629e83ce17/setup-container/0.log" Jan 27 21:44:36 crc kubenswrapper[4858]: I0127 21:44:36.582999 4858 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_6c539609-6c9e-46bc-a0d7-6a629e83ce17/rabbitmq/0.log" Jan 27 21:44:36 crc kubenswrapper[4858]: I0127 21:44:36.592053 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e61ce5ac-61b7-41f3-aab6-c4b2e03978d1/setup-container/0.log" Jan 27 21:44:36 crc kubenswrapper[4858]: I0127 21:44:36.898334 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e61ce5ac-61b7-41f3-aab6-c4b2e03978d1/rabbitmq/0.log" Jan 27 21:44:36 crc kubenswrapper[4858]: I0127 21:44:36.949480 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e61ce5ac-61b7-41f3-aab6-c4b2e03978d1/setup-container/0.log" Jan 27 21:44:36 crc kubenswrapper[4858]: I0127 21:44:36.995081 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-r79db_7a0534e5-d746-4c1e-93a0-9cd2b4f79271/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:44:37 crc kubenswrapper[4858]: I0127 21:44:37.223615 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-d646h_dea36689-21b8-4ef7-9ead-35b516cb5f60/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:44:37 crc kubenswrapper[4858]: I0127 21:44:37.248603 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-8454l_dfd9ae76-5a01-46af-995d-6fa271c1e3b8/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:44:37 crc kubenswrapper[4858]: I0127 21:44:37.478009 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-ftc8l_c5817364-db24-4f51-b709-6ec41b069f0b/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:44:37 crc kubenswrapper[4858]: I0127 21:44:37.575385 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-rn685_7ea0384d-7b36-4de2-8718-58c49d6a8ef8/ssh-known-hosts-edpm-deployment/0.log" Jan 27 21:44:37 crc kubenswrapper[4858]: I0127 21:44:37.861706 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-574fc98977-sp7zp_57e04641-598d-459b-9996-0ae4182ae4fb/proxy-server/0.log" Jan 27 21:44:38 crc kubenswrapper[4858]: I0127 21:44:38.025701 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-574fc98977-sp7zp_57e04641-598d-459b-9996-0ae4182ae4fb/proxy-httpd/0.log" Jan 27 21:44:38 crc kubenswrapper[4858]: I0127 21:44:38.112957 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-msjpr_e95660bd-4df7-4b1f-8dd1-8183870d0c8e/swift-ring-rebalance/0.log" Jan 27 21:44:38 crc kubenswrapper[4858]: I0127 21:44:38.204946 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/account-auditor/0.log" Jan 27 21:44:38 crc kubenswrapper[4858]: I0127 21:44:38.302502 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/account-reaper/0.log" Jan 27 21:44:38 crc kubenswrapper[4858]: I0127 21:44:38.419354 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/account-server/0.log" Jan 27 21:44:38 crc kubenswrapper[4858]: I0127 21:44:38.511429 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/account-replicator/0.log" Jan 27 21:44:38 crc kubenswrapper[4858]: I0127 21:44:38.561728 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/container-auditor/0.log" Jan 27 21:44:38 crc kubenswrapper[4858]: I0127 21:44:38.573202 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/container-replicator/0.log" Jan 27 21:44:38 crc kubenswrapper[4858]: I0127 21:44:38.699863 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/container-server/0.log" Jan 27 21:44:38 crc kubenswrapper[4858]: I0127 21:44:38.724382 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_bd2f70df-955b-44ba-a1be-a2f9d06a862c/memcached/0.log" Jan 27 21:44:38 crc kubenswrapper[4858]: I0127 21:44:38.730907 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/container-updater/0.log" Jan 27 21:44:38 crc kubenswrapper[4858]: I0127 21:44:38.817008 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/object-expirer/0.log" Jan 27 21:44:38 crc kubenswrapper[4858]: I0127 21:44:38.845885 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/object-auditor/0.log" Jan 27 21:44:38 crc kubenswrapper[4858]: I0127 21:44:38.949721 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/object-replicator/0.log" Jan 27 21:44:38 crc kubenswrapper[4858]: I0127 21:44:38.978430 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/object-updater/0.log" Jan 27 21:44:38 crc kubenswrapper[4858]: I0127 21:44:38.990182 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/object-server/0.log" Jan 27 21:44:39 crc kubenswrapper[4858]: I0127 21:44:39.045902 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/rsync/0.log" Jan 27 21:44:39 crc kubenswrapper[4858]: I0127 21:44:39.101694 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/swift-recon-cron/0.log" Jan 27 21:44:39 crc kubenswrapper[4858]: I0127 21:44:39.229373 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm_9116e36c-794b-4e0c-ad98-58f8daa17fc1/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:44:39 crc kubenswrapper[4858]: I0127 21:44:39.313516 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0671e111-61e9-439b-9457-c29b7d18a1f7/tempest-tests-tempest-tests-runner/0.log" Jan 27 21:44:39 crc kubenswrapper[4858]: I0127 21:44:39.455107 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_4bc44a79-c7d4-472d-9d17-2b69e894630f/test-operator-logs-container/0.log" Jan 27 21:44:39 crc kubenswrapper[4858]: I0127 21:44:39.496113 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk_29f6143e-1aa7-4d0f-91ce-267d3e2fe84e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:44:40 crc kubenswrapper[4858]: I0127 21:44:40.342918 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sk8g8"] Jan 27 21:44:40 crc kubenswrapper[4858]: E0127 21:44:40.343692 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e61ee1e1-40e6-4868-b6e3-2f3547a22151" containerName="container-00" Jan 27 21:44:40 crc kubenswrapper[4858]: I0127 21:44:40.343707 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e61ee1e1-40e6-4868-b6e3-2f3547a22151" containerName="container-00" Jan 27 21:44:40 crc kubenswrapper[4858]: I0127 21:44:40.343903 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e61ee1e1-40e6-4868-b6e3-2f3547a22151" containerName="container-00" Jan 27 21:44:40 crc kubenswrapper[4858]: I0127 21:44:40.345311 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sk8g8" Jan 27 21:44:40 crc kubenswrapper[4858]: I0127 21:44:40.353987 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sk8g8"] Jan 27 21:44:40 crc kubenswrapper[4858]: I0127 21:44:40.453052 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8mgx\" (UniqueName: \"kubernetes.io/projected/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-kube-api-access-l8mgx\") pod \"redhat-marketplace-sk8g8\" (UID: \"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f\") " pod="openshift-marketplace/redhat-marketplace-sk8g8" Jan 27 21:44:40 crc kubenswrapper[4858]: I0127 21:44:40.453170 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-utilities\") pod \"redhat-marketplace-sk8g8\" (UID: \"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f\") " pod="openshift-marketplace/redhat-marketplace-sk8g8" Jan 27 21:44:40 crc kubenswrapper[4858]: I0127 21:44:40.453324 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-catalog-content\") pod \"redhat-marketplace-sk8g8\" (UID: \"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f\") " pod="openshift-marketplace/redhat-marketplace-sk8g8" Jan 27 21:44:40 crc kubenswrapper[4858]: I0127 21:44:40.524978 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_16f660ae-e2f1-4e87-9e6a-83338f9228e9/watcher-applier/0.log" Jan 27 21:44:40 crc kubenswrapper[4858]: I0127 21:44:40.555005 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8mgx\" (UniqueName: \"kubernetes.io/projected/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-kube-api-access-l8mgx\") pod \"redhat-marketplace-sk8g8\" (UID: \"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f\") " pod="openshift-marketplace/redhat-marketplace-sk8g8" Jan 27 21:44:40 crc kubenswrapper[4858]: I0127 21:44:40.555107 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-utilities\") pod \"redhat-marketplace-sk8g8\" (UID: \"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f\") " 
pod="openshift-marketplace/redhat-marketplace-sk8g8" Jan 27 21:44:40 crc kubenswrapper[4858]: I0127 21:44:40.555199 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-catalog-content\") pod \"redhat-marketplace-sk8g8\" (UID: \"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f\") " pod="openshift-marketplace/redhat-marketplace-sk8g8" Jan 27 21:44:40 crc kubenswrapper[4858]: I0127 21:44:40.556143 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-utilities\") pod \"redhat-marketplace-sk8g8\" (UID: \"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f\") " pod="openshift-marketplace/redhat-marketplace-sk8g8" Jan 27 21:44:40 crc kubenswrapper[4858]: I0127 21:44:40.557103 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-catalog-content\") pod \"redhat-marketplace-sk8g8\" (UID: \"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f\") " pod="openshift-marketplace/redhat-marketplace-sk8g8" Jan 27 21:44:40 crc kubenswrapper[4858]: I0127 21:44:40.582836 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8mgx\" (UniqueName: \"kubernetes.io/projected/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-kube-api-access-l8mgx\") pod \"redhat-marketplace-sk8g8\" (UID: \"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f\") " pod="openshift-marketplace/redhat-marketplace-sk8g8" Jan 27 21:44:40 crc kubenswrapper[4858]: I0127 21:44:40.668455 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sk8g8" Jan 27 21:44:41 crc kubenswrapper[4858]: I0127 21:44:41.402417 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sk8g8"] Jan 27 21:44:41 crc kubenswrapper[4858]: I0127 21:44:41.418442 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_49af05ef-dc73-4178-a4f8-ce9191c8fa3d/watcher-api-log/0.log" Jan 27 21:44:41 crc kubenswrapper[4858]: I0127 21:44:41.553158 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk8g8" event={"ID":"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f","Type":"ContainerStarted","Data":"2ff6a7eb084e7508e3e165253148fbc3d6e9b0294859391dcd2de87fee599b65"} Jan 27 21:44:42 crc kubenswrapper[4858]: I0127 21:44:42.071631 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" Jan 27 21:44:42 crc kubenswrapper[4858]: E0127 21:44:42.071878 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:44:42 crc kubenswrapper[4858]: I0127 21:44:42.563080 4858 generic.go:334] "Generic (PLEG): container finished" podID="9a61f3db-41b7-4c7b-bee8-5a8d94a9402f" containerID="239fd701c2e404a3effe00fb5c23fe5a273c43fdbe9eeff7d0fdec9c7c364284" exitCode=0 Jan 27 21:44:42 crc kubenswrapper[4858]: I0127 21:44:42.563121 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-sk8g8" event={"ID":"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f","Type":"ContainerDied","Data":"239fd701c2e404a3effe00fb5c23fe5a273c43fdbe9eeff7d0fdec9c7c364284"} Jan 27 21:44:43 crc kubenswrapper[4858]: I0127 21:44:43.855833 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_7bcc0d9d-f611-4dd6-96ab-41df437ab21d/watcher-decision-engine/0.log" Jan 27 21:44:44 crc kubenswrapper[4858]: I0127 21:44:44.461580 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_49af05ef-dc73-4178-a4f8-ce9191c8fa3d/watcher-api/0.log" Jan 27 21:44:44 crc kubenswrapper[4858]: I0127 21:44:44.587961 4858 generic.go:334] "Generic (PLEG): container finished" podID="9a61f3db-41b7-4c7b-bee8-5a8d94a9402f" containerID="d23f226bda0d8aa740620aeab2273518a5f5d34bdc0c45e6d233d2e8d5d05b5f" exitCode=0 Jan 27 21:44:44 crc kubenswrapper[4858]: I0127 21:44:44.588045 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk8g8" event={"ID":"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f","Type":"ContainerDied","Data":"d23f226bda0d8aa740620aeab2273518a5f5d34bdc0c45e6d233d2e8d5d05b5f"} Jan 27 21:44:44 crc kubenswrapper[4858]: I0127 21:44:44.730610 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fwfsz"] Jan 27 21:44:44 crc kubenswrapper[4858]: I0127 21:44:44.733529 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwfsz" Jan 27 21:44:44 crc kubenswrapper[4858]: I0127 21:44:44.747482 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fwfsz"] Jan 27 21:44:44 crc kubenswrapper[4858]: I0127 21:44:44.847278 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pkzb\" (UniqueName: \"kubernetes.io/projected/5627f693-e5ee-4e3e-b0b3-35dd912412c9-kube-api-access-5pkzb\") pod \"community-operators-fwfsz\" (UID: \"5627f693-e5ee-4e3e-b0b3-35dd912412c9\") " pod="openshift-marketplace/community-operators-fwfsz" Jan 27 21:44:44 crc kubenswrapper[4858]: I0127 21:44:44.847331 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5627f693-e5ee-4e3e-b0b3-35dd912412c9-catalog-content\") pod \"community-operators-fwfsz\" (UID: \"5627f693-e5ee-4e3e-b0b3-35dd912412c9\") " pod="openshift-marketplace/community-operators-fwfsz" Jan 27 21:44:44 crc kubenswrapper[4858]: I0127 21:44:44.847368 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5627f693-e5ee-4e3e-b0b3-35dd912412c9-utilities\") pod \"community-operators-fwfsz\" (UID: \"5627f693-e5ee-4e3e-b0b3-35dd912412c9\") " pod="openshift-marketplace/community-operators-fwfsz" Jan 27 21:44:44 crc kubenswrapper[4858]: I0127 21:44:44.950382 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5627f693-e5ee-4e3e-b0b3-35dd912412c9-catalog-content\") pod \"community-operators-fwfsz\" (UID: \"5627f693-e5ee-4e3e-b0b3-35dd912412c9\") " pod="openshift-marketplace/community-operators-fwfsz" Jan 27 21:44:44 crc kubenswrapper[4858]: I0127 21:44:44.950439 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/5627f693-e5ee-4e3e-b0b3-35dd912412c9-utilities\") pod \"community-operators-fwfsz\" (UID: \"5627f693-e5ee-4e3e-b0b3-35dd912412c9\") " pod="openshift-marketplace/community-operators-fwfsz" Jan 27 21:44:44 crc kubenswrapper[4858]: I0127 21:44:44.950624 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pkzb\" (UniqueName: \"kubernetes.io/projected/5627f693-e5ee-4e3e-b0b3-35dd912412c9-kube-api-access-5pkzb\") pod \"community-operators-fwfsz\" (UID: \"5627f693-e5ee-4e3e-b0b3-35dd912412c9\") " pod="openshift-marketplace/community-operators-fwfsz" Jan 27 21:44:44 crc kubenswrapper[4858]: I0127 21:44:44.951376 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5627f693-e5ee-4e3e-b0b3-35dd912412c9-utilities\") pod \"community-operators-fwfsz\" (UID: \"5627f693-e5ee-4e3e-b0b3-35dd912412c9\") " pod="openshift-marketplace/community-operators-fwfsz" Jan 27 21:44:44 crc kubenswrapper[4858]: I0127 21:44:44.951437 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5627f693-e5ee-4e3e-b0b3-35dd912412c9-catalog-content\") pod \"community-operators-fwfsz\" (UID: \"5627f693-e5ee-4e3e-b0b3-35dd912412c9\") " pod="openshift-marketplace/community-operators-fwfsz" Jan 27 21:44:44 crc kubenswrapper[4858]: I0127 21:44:44.971370 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pkzb\" (UniqueName: \"kubernetes.io/projected/5627f693-e5ee-4e3e-b0b3-35dd912412c9-kube-api-access-5pkzb\") pod \"community-operators-fwfsz\" (UID: \"5627f693-e5ee-4e3e-b0b3-35dd912412c9\") " pod="openshift-marketplace/community-operators-fwfsz" Jan 27 21:44:45 crc kubenswrapper[4858]: I0127 21:44:45.062912 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fwfsz" Jan 27 21:44:45 crc kubenswrapper[4858]: I0127 21:44:45.516743 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fwfsz"] Jan 27 21:44:45 crc kubenswrapper[4858]: W0127 21:44:45.529719 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5627f693_e5ee_4e3e_b0b3_35dd912412c9.slice/crio-7a2cb34fd13162fb33748208dd9f8e8b28c2d4e1f10fbd4e7f49bb00ec363e7d WatchSource:0}: Error finding container 7a2cb34fd13162fb33748208dd9f8e8b28c2d4e1f10fbd4e7f49bb00ec363e7d: Status 404 returned error can't find the container with id 7a2cb34fd13162fb33748208dd9f8e8b28c2d4e1f10fbd4e7f49bb00ec363e7d Jan 27 21:44:45 crc kubenswrapper[4858]: I0127 21:44:45.625356 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwfsz" event={"ID":"5627f693-e5ee-4e3e-b0b3-35dd912412c9","Type":"ContainerStarted","Data":"7a2cb34fd13162fb33748208dd9f8e8b28c2d4e1f10fbd4e7f49bb00ec363e7d"} Jan 27 21:44:45 crc kubenswrapper[4858]: I0127 21:44:45.639695 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk8g8" event={"ID":"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f","Type":"ContainerStarted","Data":"5adc85583dbc4329fa04c623542dd98436c70d7b62435e79138e5f51953f5d44"} Jan 27 21:44:46 crc kubenswrapper[4858]: I0127 21:44:46.108539 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sk8g8" podStartSLOduration=3.667860808 podStartE2EDuration="6.108518789s" podCreationTimestamp="2026-01-27 21:44:40 +0000 UTC" firstStartedPulling="2026-01-27 21:44:42.565135184 +0000 UTC m=+5827.272950890" lastFinishedPulling="2026-01-27 21:44:45.005793165 +0000 UTC m=+5829.713608871" observedRunningTime="2026-01-27 21:44:45.67164223 +0000 UTC m=+5830.379457936" watchObservedRunningTime="2026-01-27 21:44:46.108518789 +0000 UTC m=+5830.816334495" Jan 27 21:44:46 crc kubenswrapper[4858]: I0127 21:44:46.649483 4858 generic.go:334] "Generic (PLEG): container finished" podID="5627f693-e5ee-4e3e-b0b3-35dd912412c9" containerID="a0bd7ff0590ad889be4c9cb0ba269b2c76cc5123a0970e1881e0669fdd49c4f5" exitCode=0 Jan 27 21:44:46 crc kubenswrapper[4858]: I0127 21:44:46.649632 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwfsz" event={"ID":"5627f693-e5ee-4e3e-b0b3-35dd912412c9","Type":"ContainerDied","Data":"a0bd7ff0590ad889be4c9cb0ba269b2c76cc5123a0970e1881e0669fdd49c4f5"} Jan 27 21:44:47 crc kubenswrapper[4858]: I0127 21:44:47.659640 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwfsz" event={"ID":"5627f693-e5ee-4e3e-b0b3-35dd912412c9","Type":"ContainerStarted","Data":"074201d10864cb0ed7f3a103b4d73e598242d43ba44fbe9d395d92f6dfcdb171"} Jan 27 21:44:49 crc kubenswrapper[4858]: I0127 21:44:49.681944 4858 generic.go:334] "Generic (PLEG): container finished" podID="5627f693-e5ee-4e3e-b0b3-35dd912412c9" containerID="074201d10864cb0ed7f3a103b4d73e598242d43ba44fbe9d395d92f6dfcdb171" exitCode=0 Jan 27 21:44:49 crc kubenswrapper[4858]: I0127 21:44:49.681962 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwfsz" event={"ID":"5627f693-e5ee-4e3e-b0b3-35dd912412c9","Type":"ContainerDied","Data":"074201d10864cb0ed7f3a103b4d73e598242d43ba44fbe9d395d92f6dfcdb171"} Jan 
27 21:44:50 crc kubenswrapper[4858]: I0127 21:44:50.669077 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sk8g8" Jan 27 21:44:50 crc kubenswrapper[4858]: I0127 21:44:50.669427 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sk8g8" Jan 27 21:44:50 crc kubenswrapper[4858]: I0127 21:44:50.721009 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwfsz" event={"ID":"5627f693-e5ee-4e3e-b0b3-35dd912412c9","Type":"ContainerStarted","Data":"6bd213e92988e2db6800d3fe823fe0281c87850e83a044839a29c5d0070c6d1d"} Jan 27 21:44:50 crc kubenswrapper[4858]: I0127 21:44:50.721296 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sk8g8" Jan 27 21:44:50 crc kubenswrapper[4858]: I0127 21:44:50.763431 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fwfsz" podStartSLOduration=3.306185022 podStartE2EDuration="6.763412166s" podCreationTimestamp="2026-01-27 21:44:44 +0000 UTC" firstStartedPulling="2026-01-27 21:44:46.651395148 +0000 UTC m=+5831.359210854" lastFinishedPulling="2026-01-27 21:44:50.108622302 +0000 UTC m=+5834.816437998" observedRunningTime="2026-01-27 21:44:50.759675541 +0000 UTC m=+5835.467491267" watchObservedRunningTime="2026-01-27 21:44:50.763412166 +0000 UTC m=+5835.471227872" Jan 27 21:44:50 crc kubenswrapper[4858]: I0127 21:44:50.774476 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sk8g8" Jan 27 21:44:52 crc kubenswrapper[4858]: I0127 21:44:52.912399 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sk8g8"] Jan 27 21:44:52 crc kubenswrapper[4858]: I0127 21:44:52.912994 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sk8g8" podUID="9a61f3db-41b7-4c7b-bee8-5a8d94a9402f" containerName="registry-server" containerID="cri-o://5adc85583dbc4329fa04c623542dd98436c70d7b62435e79138e5f51953f5d44" gracePeriod=2 Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.479748 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sk8g8" Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.563285 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-utilities\") pod \"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f\" (UID: \"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f\") " Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.563592 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-catalog-content\") pod \"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f\" (UID: \"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f\") " Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.563658 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8mgx\" (UniqueName: \"kubernetes.io/projected/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-kube-api-access-l8mgx\") pod \"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f\" (UID: \"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f\") " Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.564333 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-utilities" (OuterVolumeSpecName: "utilities") pod "9a61f3db-41b7-4c7b-bee8-5a8d94a9402f" (UID: "9a61f3db-41b7-4c7b-bee8-5a8d94a9402f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.570165 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-kube-api-access-l8mgx" (OuterVolumeSpecName: "kube-api-access-l8mgx") pod "9a61f3db-41b7-4c7b-bee8-5a8d94a9402f" (UID: "9a61f3db-41b7-4c7b-bee8-5a8d94a9402f"). InnerVolumeSpecName "kube-api-access-l8mgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.586826 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a61f3db-41b7-4c7b-bee8-5a8d94a9402f" (UID: "9a61f3db-41b7-4c7b-bee8-5a8d94a9402f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.666345 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.666375 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8mgx\" (UniqueName: \"kubernetes.io/projected/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-kube-api-access-l8mgx\") on node \"crc\" DevicePath \"\"" Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.666386 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.758576 4858 generic.go:334] "Generic (PLEG): container finished" podID="9a61f3db-41b7-4c7b-bee8-5a8d94a9402f" containerID="5adc85583dbc4329fa04c623542dd98436c70d7b62435e79138e5f51953f5d44" exitCode=0 Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.758621 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk8g8" event={"ID":"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f","Type":"ContainerDied","Data":"5adc85583dbc4329fa04c623542dd98436c70d7b62435e79138e5f51953f5d44"} Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.758648 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sk8g8" event={"ID":"9a61f3db-41b7-4c7b-bee8-5a8d94a9402f","Type":"ContainerDied","Data":"2ff6a7eb084e7508e3e165253148fbc3d6e9b0294859391dcd2de87fee599b65"} Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.758643 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sk8g8" Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.758744 4858 scope.go:117] "RemoveContainer" containerID="5adc85583dbc4329fa04c623542dd98436c70d7b62435e79138e5f51953f5d44" Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.780504 4858 scope.go:117] "RemoveContainer" containerID="d23f226bda0d8aa740620aeab2273518a5f5d34bdc0c45e6d233d2e8d5d05b5f" Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.806110 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sk8g8"] Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.817735 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sk8g8"] Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.823517 4858 scope.go:117] "RemoveContainer" containerID="239fd701c2e404a3effe00fb5c23fe5a273c43fdbe9eeff7d0fdec9c7c364284" Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.855706 4858 scope.go:117] "RemoveContainer" containerID="5adc85583dbc4329fa04c623542dd98436c70d7b62435e79138e5f51953f5d44" Jan 27 21:44:53 crc kubenswrapper[4858]: E0127 21:44:53.856139 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5adc85583dbc4329fa04c623542dd98436c70d7b62435e79138e5f51953f5d44\": container with ID starting with 5adc85583dbc4329fa04c623542dd98436c70d7b62435e79138e5f51953f5d44 not found: ID does not exist" containerID="5adc85583dbc4329fa04c623542dd98436c70d7b62435e79138e5f51953f5d44" Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.856179 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5adc85583dbc4329fa04c623542dd98436c70d7b62435e79138e5f51953f5d44"} err="failed to get container status \"5adc85583dbc4329fa04c623542dd98436c70d7b62435e79138e5f51953f5d44\": rpc error: code = NotFound desc = could not find container \"5adc85583dbc4329fa04c623542dd98436c70d7b62435e79138e5f51953f5d44\": container with ID starting with 5adc85583dbc4329fa04c623542dd98436c70d7b62435e79138e5f51953f5d44 not found: ID does not exist" Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.856206 4858 scope.go:117] "RemoveContainer" containerID="d23f226bda0d8aa740620aeab2273518a5f5d34bdc0c45e6d233d2e8d5d05b5f" Jan 27 21:44:53 crc kubenswrapper[4858]: E0127 21:44:53.856798 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d23f226bda0d8aa740620aeab2273518a5f5d34bdc0c45e6d233d2e8d5d05b5f\": container with ID starting with d23f226bda0d8aa740620aeab2273518a5f5d34bdc0c45e6d233d2e8d5d05b5f not found: ID does not exist" containerID="d23f226bda0d8aa740620aeab2273518a5f5d34bdc0c45e6d233d2e8d5d05b5f" Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.856834 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d23f226bda0d8aa740620aeab2273518a5f5d34bdc0c45e6d233d2e8d5d05b5f"} err="failed to get container status \"d23f226bda0d8aa740620aeab2273518a5f5d34bdc0c45e6d233d2e8d5d05b5f\": rpc error: code = NotFound desc = could not find container \"d23f226bda0d8aa740620aeab2273518a5f5d34bdc0c45e6d233d2e8d5d05b5f\": container with ID starting with d23f226bda0d8aa740620aeab2273518a5f5d34bdc0c45e6d233d2e8d5d05b5f not found: ID does not exist" Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.856861 4858 scope.go:117] "RemoveContainer" 
containerID="239fd701c2e404a3effe00fb5c23fe5a273c43fdbe9eeff7d0fdec9c7c364284" Jan 27 21:44:53 crc kubenswrapper[4858]: E0127 21:44:53.857187 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"239fd701c2e404a3effe00fb5c23fe5a273c43fdbe9eeff7d0fdec9c7c364284\": container with ID starting with 239fd701c2e404a3effe00fb5c23fe5a273c43fdbe9eeff7d0fdec9c7c364284 not found: ID does not exist" containerID="239fd701c2e404a3effe00fb5c23fe5a273c43fdbe9eeff7d0fdec9c7c364284" Jan 27 21:44:53 crc kubenswrapper[4858]: I0127 21:44:53.857213 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"239fd701c2e404a3effe00fb5c23fe5a273c43fdbe9eeff7d0fdec9c7c364284"} err="failed to get container status \"239fd701c2e404a3effe00fb5c23fe5a273c43fdbe9eeff7d0fdec9c7c364284\": rpc error: code = NotFound desc = could not find container \"239fd701c2e404a3effe00fb5c23fe5a273c43fdbe9eeff7d0fdec9c7c364284\": container with ID starting with 239fd701c2e404a3effe00fb5c23fe5a273c43fdbe9eeff7d0fdec9c7c364284 not found: ID does not exist" Jan 27 21:44:54 crc kubenswrapper[4858]: I0127 21:44:54.091842 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a61f3db-41b7-4c7b-bee8-5a8d94a9402f" path="/var/lib/kubelet/pods/9a61f3db-41b7-4c7b-bee8-5a8d94a9402f/volumes" Jan 27 21:44:55 crc kubenswrapper[4858]: I0127 21:44:55.064151 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fwfsz" Jan 27 21:44:55 crc kubenswrapper[4858]: I0127 21:44:55.064504 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fwfsz" Jan 27 21:44:55 crc kubenswrapper[4858]: I0127 21:44:55.119751 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fwfsz" Jan 27 21:44:55 crc kubenswrapper[4858]: I0127 21:44:55.831930 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fwfsz" Jan 27 21:44:56 crc kubenswrapper[4858]: I0127 21:44:56.079544 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" Jan 27 21:44:56 crc kubenswrapper[4858]: E0127 21:44:56.079808 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:44:56 crc kubenswrapper[4858]: I0127 21:44:56.103445 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fwfsz"] Jan 27 21:44:57 crc kubenswrapper[4858]: I0127 21:44:57.791921 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fwfsz" podUID="5627f693-e5ee-4e3e-b0b3-35dd912412c9" containerName="registry-server" containerID="cri-o://6bd213e92988e2db6800d3fe823fe0281c87850e83a044839a29c5d0070c6d1d" gracePeriod=2 Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.364022 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fwfsz" Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.463186 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5627f693-e5ee-4e3e-b0b3-35dd912412c9-utilities\") pod \"5627f693-e5ee-4e3e-b0b3-35dd912412c9\" (UID: \"5627f693-e5ee-4e3e-b0b3-35dd912412c9\") " Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.463517 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pkzb\" (UniqueName: \"kubernetes.io/projected/5627f693-e5ee-4e3e-b0b3-35dd912412c9-kube-api-access-5pkzb\") pod \"5627f693-e5ee-4e3e-b0b3-35dd912412c9\" (UID: \"5627f693-e5ee-4e3e-b0b3-35dd912412c9\") " Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.463826 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5627f693-e5ee-4e3e-b0b3-35dd912412c9-catalog-content\") pod \"5627f693-e5ee-4e3e-b0b3-35dd912412c9\" (UID: \"5627f693-e5ee-4e3e-b0b3-35dd912412c9\") " Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.464146 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5627f693-e5ee-4e3e-b0b3-35dd912412c9-utilities" (OuterVolumeSpecName: "utilities") pod "5627f693-e5ee-4e3e-b0b3-35dd912412c9" (UID: "5627f693-e5ee-4e3e-b0b3-35dd912412c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.464544 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5627f693-e5ee-4e3e-b0b3-35dd912412c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.486799 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5627f693-e5ee-4e3e-b0b3-35dd912412c9-kube-api-access-5pkzb" (OuterVolumeSpecName: "kube-api-access-5pkzb") pod "5627f693-e5ee-4e3e-b0b3-35dd912412c9" (UID: "5627f693-e5ee-4e3e-b0b3-35dd912412c9"). InnerVolumeSpecName "kube-api-access-5pkzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.524526 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5627f693-e5ee-4e3e-b0b3-35dd912412c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5627f693-e5ee-4e3e-b0b3-35dd912412c9" (UID: "5627f693-e5ee-4e3e-b0b3-35dd912412c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.566752 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5627f693-e5ee-4e3e-b0b3-35dd912412c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.566790 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pkzb\" (UniqueName: \"kubernetes.io/projected/5627f693-e5ee-4e3e-b0b3-35dd912412c9-kube-api-access-5pkzb\") on node \"crc\" DevicePath \"\"" Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.802680 4858 generic.go:334] "Generic (PLEG): container finished" podID="5627f693-e5ee-4e3e-b0b3-35dd912412c9" containerID="6bd213e92988e2db6800d3fe823fe0281c87850e83a044839a29c5d0070c6d1d" exitCode=0 Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.802724 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwfsz" event={"ID":"5627f693-e5ee-4e3e-b0b3-35dd912412c9","Type":"ContainerDied","Data":"6bd213e92988e2db6800d3fe823fe0281c87850e83a044839a29c5d0070c6d1d"} Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.802756 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwfsz" event={"ID":"5627f693-e5ee-4e3e-b0b3-35dd912412c9","Type":"ContainerDied","Data":"7a2cb34fd13162fb33748208dd9f8e8b28c2d4e1f10fbd4e7f49bb00ec363e7d"} Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.802773 4858 scope.go:117] "RemoveContainer" containerID="6bd213e92988e2db6800d3fe823fe0281c87850e83a044839a29c5d0070c6d1d" Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.802780 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fwfsz"
Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.824777 4858 scope.go:117] "RemoveContainer" containerID="074201d10864cb0ed7f3a103b4d73e598242d43ba44fbe9d395d92f6dfcdb171"
Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.839259 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fwfsz"]
Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.851675 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fwfsz"]
Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.869807 4858 scope.go:117] "RemoveContainer" containerID="a0bd7ff0590ad889be4c9cb0ba269b2c76cc5123a0970e1881e0669fdd49c4f5"
Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.912753 4858 scope.go:117] "RemoveContainer" containerID="6bd213e92988e2db6800d3fe823fe0281c87850e83a044839a29c5d0070c6d1d"
Jan 27 21:44:58 crc kubenswrapper[4858]: E0127 21:44:58.913241 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bd213e92988e2db6800d3fe823fe0281c87850e83a044839a29c5d0070c6d1d\": container with ID starting with 6bd213e92988e2db6800d3fe823fe0281c87850e83a044839a29c5d0070c6d1d not found: ID does not exist" containerID="6bd213e92988e2db6800d3fe823fe0281c87850e83a044839a29c5d0070c6d1d"
Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.913279 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bd213e92988e2db6800d3fe823fe0281c87850e83a044839a29c5d0070c6d1d"} err="failed to get container status \"6bd213e92988e2db6800d3fe823fe0281c87850e83a044839a29c5d0070c6d1d\": rpc error: code = NotFound desc = could not find container \"6bd213e92988e2db6800d3fe823fe0281c87850e83a044839a29c5d0070c6d1d\": container with ID starting with 6bd213e92988e2db6800d3fe823fe0281c87850e83a044839a29c5d0070c6d1d not found: ID does not exist"
Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.913305 4858 scope.go:117] "RemoveContainer" containerID="074201d10864cb0ed7f3a103b4d73e598242d43ba44fbe9d395d92f6dfcdb171"
Jan 27 21:44:58 crc kubenswrapper[4858]: E0127 21:44:58.913773 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"074201d10864cb0ed7f3a103b4d73e598242d43ba44fbe9d395d92f6dfcdb171\": container with ID starting with 074201d10864cb0ed7f3a103b4d73e598242d43ba44fbe9d395d92f6dfcdb171 not found: ID does not exist" containerID="074201d10864cb0ed7f3a103b4d73e598242d43ba44fbe9d395d92f6dfcdb171"
Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.913798 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"074201d10864cb0ed7f3a103b4d73e598242d43ba44fbe9d395d92f6dfcdb171"} err="failed to get container status \"074201d10864cb0ed7f3a103b4d73e598242d43ba44fbe9d395d92f6dfcdb171\": rpc error: code = NotFound desc = could not find container \"074201d10864cb0ed7f3a103b4d73e598242d43ba44fbe9d395d92f6dfcdb171\": container with ID starting with 074201d10864cb0ed7f3a103b4d73e598242d43ba44fbe9d395d92f6dfcdb171 not found: ID does not exist"
Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.913814 4858 scope.go:117] "RemoveContainer" containerID="a0bd7ff0590ad889be4c9cb0ba269b2c76cc5123a0970e1881e0669fdd49c4f5"
Jan 27 21:44:58 crc kubenswrapper[4858]: E0127 21:44:58.914158 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0bd7ff0590ad889be4c9cb0ba269b2c76cc5123a0970e1881e0669fdd49c4f5\": container with ID starting with a0bd7ff0590ad889be4c9cb0ba269b2c76cc5123a0970e1881e0669fdd49c4f5 not found: ID does not exist" containerID="a0bd7ff0590ad889be4c9cb0ba269b2c76cc5123a0970e1881e0669fdd49c4f5"
Jan 27 21:44:58 crc kubenswrapper[4858]: I0127 21:44:58.914186 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0bd7ff0590ad889be4c9cb0ba269b2c76cc5123a0970e1881e0669fdd49c4f5"} err="failed to get container status \"a0bd7ff0590ad889be4c9cb0ba269b2c76cc5123a0970e1881e0669fdd49c4f5\": rpc error: code = NotFound desc = could not find container \"a0bd7ff0590ad889be4c9cb0ba269b2c76cc5123a0970e1881e0669fdd49c4f5\": container with ID starting with a0bd7ff0590ad889be4c9cb0ba269b2c76cc5123a0970e1881e0669fdd49c4f5 not found: ID does not exist"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.082883 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5627f693-e5ee-4e3e-b0b3-35dd912412c9" path="/var/lib/kubelet/pods/5627f693-e5ee-4e3e-b0b3-35dd912412c9/volumes"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.165739 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2"]
Jan 27 21:45:00 crc kubenswrapper[4858]: E0127 21:45:00.166309 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5627f693-e5ee-4e3e-b0b3-35dd912412c9" containerName="registry-server"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.166332 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5627f693-e5ee-4e3e-b0b3-35dd912412c9" containerName="registry-server"
Jan 27 21:45:00 crc kubenswrapper[4858]: E0127 21:45:00.166368 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5627f693-e5ee-4e3e-b0b3-35dd912412c9" containerName="extract-utilities"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.166378 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5627f693-e5ee-4e3e-b0b3-35dd912412c9" containerName="extract-utilities"
Jan 27 21:45:00 crc kubenswrapper[4858]: E0127 21:45:00.166388 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a61f3db-41b7-4c7b-bee8-5a8d94a9402f" containerName="registry-server"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.166396 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a61f3db-41b7-4c7b-bee8-5a8d94a9402f" containerName="registry-server"
Jan 27 21:45:00 crc kubenswrapper[4858]: E0127 21:45:00.166410 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a61f3db-41b7-4c7b-bee8-5a8d94a9402f" containerName="extract-utilities"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.166418 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a61f3db-41b7-4c7b-bee8-5a8d94a9402f" containerName="extract-utilities"
Jan 27 21:45:00 crc kubenswrapper[4858]: E0127 21:45:00.166459 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5627f693-e5ee-4e3e-b0b3-35dd912412c9" containerName="extract-content"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.166469 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5627f693-e5ee-4e3e-b0b3-35dd912412c9" containerName="extract-content"
Jan 27 21:45:00 crc kubenswrapper[4858]: E0127 21:45:00.166479 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a61f3db-41b7-4c7b-bee8-5a8d94a9402f" containerName="extract-content"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.166485 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a61f3db-41b7-4c7b-bee8-5a8d94a9402f" containerName="extract-content"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.166759 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a61f3db-41b7-4c7b-bee8-5a8d94a9402f" containerName="registry-server"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.166787 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5627f693-e5ee-4e3e-b0b3-35dd912412c9" containerName="registry-server"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.167762 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.170635 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.170817 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.177675 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2"]
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.330651 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pvrg\" (UniqueName: \"kubernetes.io/projected/2e1284dc-b08a-46ba-9049-096ef10f279e-kube-api-access-5pvrg\") pod \"collect-profiles-29492505-s9kh2\" (UID: \"2e1284dc-b08a-46ba-9049-096ef10f279e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.330786 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e1284dc-b08a-46ba-9049-096ef10f279e-config-volume\") pod \"collect-profiles-29492505-s9kh2\" (UID: \"2e1284dc-b08a-46ba-9049-096ef10f279e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.330828 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e1284dc-b08a-46ba-9049-096ef10f279e-secret-volume\") pod \"collect-profiles-29492505-s9kh2\" (UID: \"2e1284dc-b08a-46ba-9049-096ef10f279e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.432965 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pvrg\" (UniqueName: \"kubernetes.io/projected/2e1284dc-b08a-46ba-9049-096ef10f279e-kube-api-access-5pvrg\") pod \"collect-profiles-29492505-s9kh2\" (UID: \"2e1284dc-b08a-46ba-9049-096ef10f279e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.433168 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e1284dc-b08a-46ba-9049-096ef10f279e-config-volume\") pod \"collect-profiles-29492505-s9kh2\" (UID: \"2e1284dc-b08a-46ba-9049-096ef10f279e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.433237 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e1284dc-b08a-46ba-9049-096ef10f279e-secret-volume\") pod \"collect-profiles-29492505-s9kh2\" (UID: \"2e1284dc-b08a-46ba-9049-096ef10f279e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.434803 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e1284dc-b08a-46ba-9049-096ef10f279e-config-volume\") pod \"collect-profiles-29492505-s9kh2\" (UID: \"2e1284dc-b08a-46ba-9049-096ef10f279e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.442281 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e1284dc-b08a-46ba-9049-096ef10f279e-secret-volume\") pod \"collect-profiles-29492505-s9kh2\" (UID: \"2e1284dc-b08a-46ba-9049-096ef10f279e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.449638 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pvrg\" (UniqueName: \"kubernetes.io/projected/2e1284dc-b08a-46ba-9049-096ef10f279e-kube-api-access-5pvrg\") pod \"collect-profiles-29492505-s9kh2\" (UID: \"2e1284dc-b08a-46ba-9049-096ef10f279e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2"
Jan 27 21:45:00 crc kubenswrapper[4858]: I0127 21:45:00.540513 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2"
Jan 27 21:45:01 crc kubenswrapper[4858]: I0127 21:45:01.016292 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2"]
Jan 27 21:45:01 crc kubenswrapper[4858]: I0127 21:45:01.830859 4858 generic.go:334] "Generic (PLEG): container finished" podID="2e1284dc-b08a-46ba-9049-096ef10f279e" containerID="fe6222ab05ca55ffa0a0c8537b5ddc17e77c38ad3c602131b808f4a9d4f64545" exitCode=0
Jan 27 21:45:01 crc kubenswrapper[4858]: I0127 21:45:01.830946 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2" event={"ID":"2e1284dc-b08a-46ba-9049-096ef10f279e","Type":"ContainerDied","Data":"fe6222ab05ca55ffa0a0c8537b5ddc17e77c38ad3c602131b808f4a9d4f64545"}
Jan 27 21:45:01 crc kubenswrapper[4858]: I0127 21:45:01.831156 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2" event={"ID":"2e1284dc-b08a-46ba-9049-096ef10f279e","Type":"ContainerStarted","Data":"a0005c0f5c9d1832c222ce97f9aca2c8ce0f23851ad1b0082466a8194558bcc6"}
Jan 27 21:45:03 crc kubenswrapper[4858]: I0127 21:45:03.217229 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2"
Jan 27 21:45:03 crc kubenswrapper[4858]: I0127 21:45:03.290775 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e1284dc-b08a-46ba-9049-096ef10f279e-secret-volume\") pod \"2e1284dc-b08a-46ba-9049-096ef10f279e\" (UID: \"2e1284dc-b08a-46ba-9049-096ef10f279e\") "
Jan 27 21:45:03 crc kubenswrapper[4858]: I0127 21:45:03.290873 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pvrg\" (UniqueName: \"kubernetes.io/projected/2e1284dc-b08a-46ba-9049-096ef10f279e-kube-api-access-5pvrg\") pod \"2e1284dc-b08a-46ba-9049-096ef10f279e\" (UID: \"2e1284dc-b08a-46ba-9049-096ef10f279e\") "
Jan 27 21:45:03 crc kubenswrapper[4858]: I0127 21:45:03.290979 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e1284dc-b08a-46ba-9049-096ef10f279e-config-volume\") pod \"2e1284dc-b08a-46ba-9049-096ef10f279e\" (UID: \"2e1284dc-b08a-46ba-9049-096ef10f279e\") "
Jan 27 21:45:03 crc kubenswrapper[4858]: I0127 21:45:03.291684 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e1284dc-b08a-46ba-9049-096ef10f279e-config-volume" (OuterVolumeSpecName: "config-volume") pod "2e1284dc-b08a-46ba-9049-096ef10f279e" (UID: "2e1284dc-b08a-46ba-9049-096ef10f279e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 21:45:03 crc kubenswrapper[4858]: I0127 21:45:03.298334 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1284dc-b08a-46ba-9049-096ef10f279e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2e1284dc-b08a-46ba-9049-096ef10f279e" (UID: "2e1284dc-b08a-46ba-9049-096ef10f279e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 21:45:03 crc kubenswrapper[4858]: I0127 21:45:03.302799 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e1284dc-b08a-46ba-9049-096ef10f279e-kube-api-access-5pvrg" (OuterVolumeSpecName: "kube-api-access-5pvrg") pod "2e1284dc-b08a-46ba-9049-096ef10f279e" (UID: "2e1284dc-b08a-46ba-9049-096ef10f279e"). InnerVolumeSpecName "kube-api-access-5pvrg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:45:03 crc kubenswrapper[4858]: I0127 21:45:03.394219 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pvrg\" (UniqueName: \"kubernetes.io/projected/2e1284dc-b08a-46ba-9049-096ef10f279e-kube-api-access-5pvrg\") on node \"crc\" DevicePath \"\""
Jan 27 21:45:03 crc kubenswrapper[4858]: I0127 21:45:03.394265 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e1284dc-b08a-46ba-9049-096ef10f279e-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 21:45:03 crc kubenswrapper[4858]: I0127 21:45:03.394282 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e1284dc-b08a-46ba-9049-096ef10f279e-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 21:45:04 crc kubenswrapper[4858]: I0127 21:45:04.331943 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2" event={"ID":"2e1284dc-b08a-46ba-9049-096ef10f279e","Type":"ContainerDied","Data":"a0005c0f5c9d1832c222ce97f9aca2c8ce0f23851ad1b0082466a8194558bcc6"}
Jan 27 21:45:04 crc kubenswrapper[4858]: I0127 21:45:04.331995 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0005c0f5c9d1832c222ce97f9aca2c8ce0f23851ad1b0082466a8194558bcc6"
Jan 27 21:45:04 crc kubenswrapper[4858]: I0127 21:45:04.332058 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492505-s9kh2"
Jan 27 21:45:04 crc kubenswrapper[4858]: I0127 21:45:04.354860 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7"]
Jan 27 21:45:04 crc kubenswrapper[4858]: I0127 21:45:04.375658 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492460-glpv7"]
Jan 27 21:45:06 crc kubenswrapper[4858]: I0127 21:45:06.082572 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3ac7699-9e31-4b84-99ce-403308136463" path="/var/lib/kubelet/pods/c3ac7699-9e31-4b84-99ce-403308136463/volumes"
Jan 27 21:45:10 crc kubenswrapper[4858]: I0127 21:45:10.071388 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5"
Jan 27 21:45:10 crc kubenswrapper[4858]: E0127 21:45:10.072052 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
Jan 27 21:45:12 crc kubenswrapper[4858]: I0127 21:45:12.095862 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6_52190d7c-3903-46b5-8fa4-96ef6b154bbe/util/0.log"
Jan 27 21:45:12 crc kubenswrapper[4858]: I0127 21:45:12.257965 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6_52190d7c-3903-46b5-8fa4-96ef6b154bbe/util/0.log"
Jan 27 21:45:12 crc kubenswrapper[4858]: I0127 21:45:12.266127 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6_52190d7c-3903-46b5-8fa4-96ef6b154bbe/pull/0.log"
Jan 27 21:45:12 crc kubenswrapper[4858]: I0127 21:45:12.329507 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6_52190d7c-3903-46b5-8fa4-96ef6b154bbe/pull/0.log"
Jan 27 21:45:12 crc kubenswrapper[4858]: I0127 21:45:12.502559 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6_52190d7c-3903-46b5-8fa4-96ef6b154bbe/util/0.log"
Jan 27 21:45:12 crc kubenswrapper[4858]: I0127 21:45:12.571913 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6_52190d7c-3903-46b5-8fa4-96ef6b154bbe/extract/0.log"
Jan 27 21:45:12 crc kubenswrapper[4858]: I0127 21:45:12.594716 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6_52190d7c-3903-46b5-8fa4-96ef6b154bbe/pull/0.log"
Jan 27 21:45:12 crc kubenswrapper[4858]: I0127 21:45:12.822623 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-pl99n_f2bb693c-1d95-483e-b7c5-151516bd015e/manager/0.log"
Jan 27 21:45:12 crc kubenswrapper[4858]: I0127 21:45:12.869863 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-6tsd5_50605190-4834-4573-b8c9-70f5ca60b820/manager/0.log"
Jan 27 21:45:13 crc kubenswrapper[4858]: I0127 21:45:13.022213 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-qlssw_fd5e8600-d46a-4463-b592-f6d6025bf66f/manager/0.log"
Jan 27 21:45:13 crc kubenswrapper[4858]: I0127 21:45:13.161474 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-hg2t5_eba796fd-f7a8-4f83-9a75-7036f77d73f1/manager/0.log"
Jan 27 21:45:13 crc kubenswrapper[4858]: I0127 21:45:13.256680 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-m4pbf_6074b126-8795-48bc-8984-fc25402032a2/manager/0.log"
Jan 27 21:45:13 crc kubenswrapper[4858]: I0127 21:45:13.416992 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-6rrnl_e86b137e-cd0c-4243-801f-dad4eb19373b/manager/0.log"
Jan 27 21:45:13 crc kubenswrapper[4858]: I0127 21:45:13.729744 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-k69nl_397758a8-62c2-41ba-8177-5309d797bb2f/manager/0.log"
Jan 27 21:45:13 crc kubenswrapper[4858]: I0127 21:45:13.848789 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-tskvm_f5527334-db65-4031-a24f-9aafcffb6708/manager/0.log"
Jan 27 21:45:13 crc kubenswrapper[4858]: I0127 21:45:13.948884 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-dxfwr_8a5eb91f-e957-4f9d-86c9-5f8905c6bee4/manager/0.log"
Jan 27 21:45:14 crc kubenswrapper[4858]: I0127 21:45:14.031803 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-cnhqv_7de86ff1-90b3-470b-bab1-344555db1153/manager/0.log"
Jan 27 21:45:14 crc kubenswrapper[4858]: I0127 21:45:14.227028 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-54b92_446c00be-b860-4220-bcc1-457005d92650/manager/0.log"
Jan 27 21:45:14 crc kubenswrapper[4858]: I0127 21:45:14.333098 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-m6lz4_feb30e7d-db27-4e87-ba07-f4730b228588/manager/0.log"
Jan 27 21:45:14 crc kubenswrapper[4858]: I0127 21:45:14.567984 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-f7gwl_314a20ef-a97b-40a6-8a85-b118e64d9a3a/manager/0.log"
Jan 27 21:45:14 crc kubenswrapper[4858]: I0127 21:45:14.615967 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-dxhnn_74b2bb8d-cae5-4033-b999-73e3ed604cb9/manager/0.log"
Jan 27 21:45:14 crc kubenswrapper[4858]: I0127 21:45:14.720793 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh_c3bd5d36-c726-4b79-9c08-22bb23dabc28/manager/0.log"
Jan 27 21:45:15 crc kubenswrapper[4858]: I0127 21:45:15.099876 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6f9f75d44c-9lgbg_307753ad-bb67-4220-9b56-e588037652f4/operator/0.log"
Jan 27 21:45:15 crc kubenswrapper[4858]: I0127 21:45:15.425736 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-w7wsb_ba4e832d-ff36-45e9-90b9-44e125906dba/registry-server/0.log"
Jan 27 21:45:15 crc kubenswrapper[4858]: I0127 21:45:15.726897 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-jrl5h_968ee010-0e16-462d-82d3-7c5d61f107a1/manager/0.log"
Jan 27 21:45:15 crc kubenswrapper[4858]: I0127 21:45:15.785484 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-8st2f_c4dfc413-8d91-4a08-aef6-47188c0971c4/manager/0.log"
Jan 27 21:45:16 crc kubenswrapper[4858]: I0127 21:45:16.085357 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-tc2j8_9e4c347f-b102-40c1-8935-77fdef528d14/operator/0.log"
Jan 27 21:45:16 crc kubenswrapper[4858]: I0127 21:45:16.290290 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-cp6lt_b778d97d-e9dc-4017-94ff-9cfd82322a3a/manager/0.log"
Jan 27 21:45:16 crc kubenswrapper[4858]: I0127 21:45:16.601717 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-86d6949bb8-k78rw_f75129ba-73c8-4f91-99b0-42d191fb0510/manager/0.log"
Jan 27 21:45:16 crc kubenswrapper[4858]: I0127 21:45:16.637572 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-sgtcz_304980dc-cb07-41fa-ba11-1262d5a2b43b/manager/0.log"
Jan 27 21:45:16 crc kubenswrapper[4858]: I0127 21:45:16.679114 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-7w8hk_c15c4bec-780c-42d1-8f36-618b255a95f6/manager/0.log"
Jan 27 21:45:16 crc kubenswrapper[4858]: I0127 21:45:16.923432 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5975f685d8-snnk5_112cff1f-1841-4fe8-96e2-95d2be2957a2/manager/0.log"
Jan 27 21:45:21 crc kubenswrapper[4858]: I0127 21:45:21.071619 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5"
Jan 27 21:45:21 crc kubenswrapper[4858]: E0127 21:45:21.072243 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
Jan 27 21:45:36 crc kubenswrapper[4858]: I0127 21:45:36.080927 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5"
Jan 27 21:45:36 crc kubenswrapper[4858]: E0127 21:45:36.081830 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
Jan 27 21:45:37 crc kubenswrapper[4858]: I0127 21:45:37.054503 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-c6zzp_01f33b82-5877-4c9d-ba44-3c6676c5f41d/control-plane-machine-set-operator/0.log"
Jan 27 21:45:37 crc kubenswrapper[4858]: I0127 21:45:37.217334 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mqblw_f20c3023-909c-4904-b65a-f4627bf28119/kube-rbac-proxy/0.log"
Jan 27 21:45:37 crc kubenswrapper[4858]: I0127 21:45:37.258936 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mqblw_f20c3023-909c-4904-b65a-f4627bf28119/machine-api-operator/0.log"
Jan 27 21:45:51 crc kubenswrapper[4858]: I0127 21:45:51.071407 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5"
Jan 27 21:45:51 crc kubenswrapper[4858]: E0127 21:45:51.072243 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
Jan 27 21:45:51 crc kubenswrapper[4858]: I0127 21:45:51.223176 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-n8kqf_f425f50a-9405-4c04-b320-22524d815b8a/cert-manager-controller/0.log"
Jan 27 21:45:51 crc kubenswrapper[4858]: I0127 21:45:51.369034 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-x9cph_92b94f6b-96ed-4ee3-96e6-8d1c22358773/cert-manager-cainjector/0.log"
Jan 27 21:45:51 crc kubenswrapper[4858]: I0127 21:45:51.473393 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-86ftq_7d2f237c-d08c-479b-a3e3-7ef983dc2c41/cert-manager-webhook/0.log"
Jan 27 21:46:01 crc kubenswrapper[4858]: I0127 21:46:01.984580 4858 scope.go:117] "RemoveContainer" containerID="c14238c879cf14bd5a656c764b5626e64fa0679703813653d5b3cefcff125bfc"
Jan 27 21:46:02 crc kubenswrapper[4858]: I0127 21:46:02.071724 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5"
Jan 27 21:46:02 crc kubenswrapper[4858]: E0127 21:46:02.072464 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
Jan 27 21:46:05 crc kubenswrapper[4858]: I0127 21:46:05.491621 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-pjzt5_8ee1edac-ca66-4ed5-a281-67b735710be5/nmstate-console-plugin/0.log"
Jan 27 21:46:05 crc kubenswrapper[4858]: I0127 21:46:05.731404 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-xxvgs_7bfb1746-53f8-427e-ab49-1b84279b9437/nmstate-handler/0.log"
Jan 27 21:46:05 crc kubenswrapper[4858]: I0127 21:46:05.819069 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dqkn5_a9cfc031-eed0-42fd-94cc-707c19c84cae/kube-rbac-proxy/0.log"
Jan 27 21:46:05 crc kubenswrapper[4858]: I0127 21:46:05.949430 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dqkn5_a9cfc031-eed0-42fd-94cc-707c19c84cae/nmstate-metrics/0.log"
Jan 27 21:46:06 crc kubenswrapper[4858]: I0127 21:46:06.035474 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-9z7zh_613b924b-b7a1-4507-94ed-be8377c1d87d/nmstate-operator/0.log"
Jan 27 21:46:06 crc kubenswrapper[4858]: I0127 21:46:06.144484 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-6bf2p_f00f3a98-58f2-445c-a008-290a987092a2/nmstate-webhook/0.log"
Jan 27 21:46:14 crc kubenswrapper[4858]: I0127 21:46:14.071068 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5"
Jan 27 21:46:14 crc kubenswrapper[4858]: E0127 21:46:14.071858 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
Jan 27 21:46:19 crc kubenswrapper[4858]: I0127 21:46:19.008306 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sbptw"]
Jan 27 21:46:19 crc kubenswrapper[4858]: E0127 21:46:19.009151 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1284dc-b08a-46ba-9049-096ef10f279e" containerName="collect-profiles"
Jan 27 21:46:19 crc kubenswrapper[4858]: I0127 21:46:19.009163 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1284dc-b08a-46ba-9049-096ef10f279e" containerName="collect-profiles"
Jan 27 21:46:19 crc kubenswrapper[4858]: I0127 21:46:19.009371 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e1284dc-b08a-46ba-9049-096ef10f279e" containerName="collect-profiles"
Jan 27 21:46:19 crc kubenswrapper[4858]: I0127 21:46:19.011124 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sbptw"
Jan 27 21:46:19 crc kubenswrapper[4858]: I0127 21:46:19.024214 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sbptw"]
Jan 27 21:46:19 crc kubenswrapper[4858]: I0127 21:46:19.103410 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/399b2fa3-1c0c-480b-9b00-81e367c62c79-catalog-content\") pod \"redhat-operators-sbptw\" (UID: \"399b2fa3-1c0c-480b-9b00-81e367c62c79\") " pod="openshift-marketplace/redhat-operators-sbptw"
Jan 27 21:46:19 crc kubenswrapper[4858]: I0127 21:46:19.103710 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnnhb\" (UniqueName: \"kubernetes.io/projected/399b2fa3-1c0c-480b-9b00-81e367c62c79-kube-api-access-fnnhb\") pod \"redhat-operators-sbptw\" (UID: \"399b2fa3-1c0c-480b-9b00-81e367c62c79\") " pod="openshift-marketplace/redhat-operators-sbptw"
Jan 27 21:46:19 crc kubenswrapper[4858]: I0127 21:46:19.103762 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/399b2fa3-1c0c-480b-9b00-81e367c62c79-utilities\") pod \"redhat-operators-sbptw\" (UID: \"399b2fa3-1c0c-480b-9b00-81e367c62c79\") " pod="openshift-marketplace/redhat-operators-sbptw"
Jan 27 21:46:19 crc kubenswrapper[4858]: I0127 21:46:19.206135 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/399b2fa3-1c0c-480b-9b00-81e367c62c79-catalog-content\") pod \"redhat-operators-sbptw\" (UID: \"399b2fa3-1c0c-480b-9b00-81e367c62c79\") " pod="openshift-marketplace/redhat-operators-sbptw"
Jan 27 21:46:19 crc kubenswrapper[4858]: I0127 21:46:19.206302 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnnhb\" (UniqueName: \"kubernetes.io/projected/399b2fa3-1c0c-480b-9b00-81e367c62c79-kube-api-access-fnnhb\") pod \"redhat-operators-sbptw\" (UID: \"399b2fa3-1c0c-480b-9b00-81e367c62c79\") " pod="openshift-marketplace/redhat-operators-sbptw"
Jan 27 21:46:19 crc kubenswrapper[4858]: I0127 21:46:19.206354 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/399b2fa3-1c0c-480b-9b00-81e367c62c79-utilities\") pod \"redhat-operators-sbptw\" (UID: \"399b2fa3-1c0c-480b-9b00-81e367c62c79\") " pod="openshift-marketplace/redhat-operators-sbptw"
Jan 27 21:46:19 crc kubenswrapper[4858]: I0127 21:46:19.206864 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/399b2fa3-1c0c-480b-9b00-81e367c62c79-catalog-content\") pod \"redhat-operators-sbptw\" (UID: \"399b2fa3-1c0c-480b-9b00-81e367c62c79\") " pod="openshift-marketplace/redhat-operators-sbptw"
Jan 27 21:46:19 crc kubenswrapper[4858]: I0127 21:46:19.206916 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/399b2fa3-1c0c-480b-9b00-81e367c62c79-utilities\") pod \"redhat-operators-sbptw\" (UID: \"399b2fa3-1c0c-480b-9b00-81e367c62c79\") " pod="openshift-marketplace/redhat-operators-sbptw"
Jan 27 21:46:19 crc kubenswrapper[4858]: I0127 21:46:19.241079 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnnhb\" (UniqueName: \"kubernetes.io/projected/399b2fa3-1c0c-480b-9b00-81e367c62c79-kube-api-access-fnnhb\") pod \"redhat-operators-sbptw\" (UID: \"399b2fa3-1c0c-480b-9b00-81e367c62c79\") " pod="openshift-marketplace/redhat-operators-sbptw"
Jan 27 21:46:19 crc kubenswrapper[4858]: I0127 21:46:19.386363 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sbptw"
Jan 27 21:46:19 crc kubenswrapper[4858]: I0127 21:46:19.939818 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sbptw"]
Jan 27 21:46:20 crc kubenswrapper[4858]: I0127 21:46:20.090068 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbptw" event={"ID":"399b2fa3-1c0c-480b-9b00-81e367c62c79","Type":"ContainerStarted","Data":"e60c03d93d23fce80859223cb5f8a138ee8229316ed46fd0dc09782a3cd8509d"}
Jan 27 21:46:21 crc kubenswrapper[4858]: I0127 21:46:21.100808 4858 generic.go:334] "Generic (PLEG): container finished" podID="399b2fa3-1c0c-480b-9b00-81e367c62c79" containerID="44c61e1d7785fd1392133b3f449b8ff23df94dce3766299701f158c1aa71869a" exitCode=0
Jan 27 21:46:21 crc kubenswrapper[4858]: I0127 21:46:21.100922 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbptw" event={"ID":"399b2fa3-1c0c-480b-9b00-81e367c62c79","Type":"ContainerDied","Data":"44c61e1d7785fd1392133b3f449b8ff23df94dce3766299701f158c1aa71869a"}
Jan 27 21:46:21 crc kubenswrapper[4858]: I0127 21:46:21.103062 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 21:46:21 crc kubenswrapper[4858]: I0127 21:46:21.560235 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-bdznk_35e8e577-768b-425e-ae5e-74f9f4710566/prometheus-operator/0.log"
Jan 27 21:46:21 crc kubenswrapper[4858]: I0127 21:46:21.752344 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh_c4c617c2-8b14-4e9c-8a40-ab1353beeb33/prometheus-operator-admission-webhook/0.log"
Jan 27 21:46:21 crc kubenswrapper[4858]: I0127 21:46:21.816124 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57c849b6b8-vk825_812a6b90-9a07-4f7f-864d-baa13b5ab210/prometheus-operator-admission-webhook/0.log"
Jan 27 21:46:21 crc kubenswrapper[4858]: I0127 21:46:21.980290 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-dj5bj_40809707-fd14-4599-a0ac-0bcb0c90661d/operator/0.log"
Jan 27 21:46:22 crc kubenswrapper[4858]: I0127 21:46:22.084537 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-nfc2q_3c0cbb64-d018-496a-a983-8c4761f142ed/perses-operator/0.log"
Jan 27 21:46:23 crc kubenswrapper[4858]: I0127 21:46:23.119663 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbptw" event={"ID":"399b2fa3-1c0c-480b-9b00-81e367c62c79","Type":"ContainerStarted","Data":"e8b0aa0d098322df9b1c1a1f923926340782925af9e280584f4111ad46547edf"}
Jan 27 21:46:27 crc kubenswrapper[4858]: I0127 21:46:27.071432 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5"
Jan 27 21:46:27 crc kubenswrapper[4858]: E0127 21:46:27.072361 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
Jan 27 21:46:27 crc kubenswrapper[4858]: I0127 21:46:27.162364 4858 generic.go:334] "Generic (PLEG): container finished" podID="399b2fa3-1c0c-480b-9b00-81e367c62c79" containerID="e8b0aa0d098322df9b1c1a1f923926340782925af9e280584f4111ad46547edf" exitCode=0
Jan 27 21:46:27 crc kubenswrapper[4858]: I0127 21:46:27.162402 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbptw" event={"ID":"399b2fa3-1c0c-480b-9b00-81e367c62c79","Type":"ContainerDied","Data":"e8b0aa0d098322df9b1c1a1f923926340782925af9e280584f4111ad46547edf"}
Jan 27 21:46:28 crc kubenswrapper[4858]: I0127 21:46:28.173118 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbptw" event={"ID":"399b2fa3-1c0c-480b-9b00-81e367c62c79","Type":"ContainerStarted","Data":"ee77fd4d2163adf9cf2140c0834dd3b5262df86f38a19e83457accd42c8d887d"}
Jan 27 21:46:28 crc kubenswrapper[4858]: I0127 21:46:28.199003 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sbptw" podStartSLOduration=3.71351289 podStartE2EDuration="10.198980679s" podCreationTimestamp="2026-01-27 21:46:18 +0000 UTC" firstStartedPulling="2026-01-27 21:46:21.102778305 +0000 UTC m=+5925.810594011" lastFinishedPulling="2026-01-27 21:46:27.588246094 +0000 UTC m=+5932.296061800" observedRunningTime="2026-01-27 21:46:28.190792158 +0000 UTC m=+5932.898607884" watchObservedRunningTime="2026-01-27 21:46:28.198980679 +0000 UTC m=+5932.906796385"
Jan 27 21:46:29 crc kubenswrapper[4858]: I0127 21:46:29.386621 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sbptw"
Jan 27 21:46:29 crc kubenswrapper[4858]: I0127 21:46:29.386949 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sbptw"
Jan 27 21:46:30 crc kubenswrapper[4858]: I0127 21:46:30.440466 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sbptw" podUID="399b2fa3-1c0c-480b-9b00-81e367c62c79" containerName="registry-server" probeResult="failure" output=<
Jan 27 21:46:30 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s
Jan 27 21:46:30 crc kubenswrapper[4858]: >
Jan 27 21:46:37 crc kubenswrapper[4858]: I0127 21:46:37.563647 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-flgg7_12b83edf-4de2-4aa1-8dcd-147782a08fd4/kube-rbac-proxy/0.log"
Jan 27 21:46:37 crc kubenswrapper[4858]: I0127 21:46:37.686961 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-flgg7_12b83edf-4de2-4aa1-8dcd-147782a08fd4/controller/0.log"
Jan 27 21:46:37 crc kubenswrapper[4858]: I0127 21:46:37.853361 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-k8wd8_131f384a-33a7-421b-be46-51d5561a6e98/frr-k8s-webhook-server/0.log"
Jan 27 21:46:37 crc kubenswrapper[4858]: I0127 21:46:37.872170 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-frr-files/0.log"
Jan 27 21:46:38 crc kubenswrapper[4858]: I0127 21:46:38.047182 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-frr-files/0.log"
Jan 27 21:46:38 crc kubenswrapper[4858]: I0127 21:46:38.086636 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-reloader/0.log"
Jan 27 21:46:38 crc kubenswrapper[4858]: I0127 21:46:38.143870 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-metrics/0.log"
Jan 27 21:46:38 crc kubenswrapper[4858]: I0127 21:46:38.173990 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-reloader/0.log"
Jan 27 21:46:38 crc kubenswrapper[4858]: I0127 21:46:38.301477 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-frr-files/0.log"
Jan 27 21:46:38 crc kubenswrapper[4858]: I0127 21:46:38.311415 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-reloader/0.log"
Jan 27 21:46:38 crc kubenswrapper[4858]: I0127 21:46:38.343249 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-metrics/0.log"
Jan 27 21:46:38 crc kubenswrapper[4858]: I0127 21:46:38.391334 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-metrics/0.log"
Jan 27 21:46:38 crc kubenswrapper[4858]: I0127 21:46:38.564674 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-reloader/0.log"
Jan 27 21:46:38 crc kubenswrapper[4858]: I0127 21:46:38.590471 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-frr-files/0.log"
Jan 27 21:46:38 crc kubenswrapper[4858]: I0127 21:46:38.602644 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/controller/0.log"
Jan 27 21:46:38 crc kubenswrapper[4858]: I0127 21:46:38.615241 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-metrics/0.log"
Jan 27 21:46:38 crc kubenswrapper[4858]: I0127 21:46:38.756106 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/frr-metrics/0.log"
Jan 27 21:46:38 crc kubenswrapper[4858]: I0127 21:46:38.798681 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/kube-rbac-proxy/0.log"
Jan 27 21:46:38 crc kubenswrapper[4858]: I0127 21:46:38.833063 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/kube-rbac-proxy-frr/0.log"
Jan 27 21:46:38 crc kubenswrapper[4858]: I0127 21:46:38.984589 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/reloader/0.log"
Jan 27 21:46:39 crc kubenswrapper[4858]: I0127 21:46:39.104958 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5c967b4747-92zgn_77b99589-63b5-4df6-b9b7-fc5335eb3463/manager/0.log"
Jan 27 21:46:39 crc kubenswrapper[4858]: I0127 21:46:39.346398 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5c6d8d9f7d-qxppt_079958dc-db6c-480e-90bd-1771c1c404b2/webhook-server/0.log"
Jan 27 21:46:39 crc kubenswrapper[4858]: I0127 21:46:39.438973 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sbptw"
Jan 27 21:46:39 crc kubenswrapper[4858]: I0127 21:46:39.493656 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sbptw"
Jan 27 21:46:39 crc kubenswrapper[4858]: I0127 21:46:39.504461 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hw6th_ca2b2ed3-9750-407d-b919-fd5c6e060e0b/kube-rbac-proxy/0.log"
Jan 27 21:46:39 crc kubenswrapper[4858]: I0127 21:46:39.688458 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sbptw"]
Jan 27 21:46:40 crc kubenswrapper[4858]: I0127 21:46:40.111664 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hw6th_ca2b2ed3-9750-407d-b919-fd5c6e060e0b/speaker/0.log"
Jan 27 21:46:40 crc kubenswrapper[4858]: I0127 21:46:40.562134 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/frr/0.log"
Jan 27 21:46:41 crc kubenswrapper[4858]: I0127 21:46:41.070675 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5"
Jan 27 21:46:41 crc kubenswrapper[4858]: E0127 21:46:41.071210 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
Jan 27 21:46:41 crc kubenswrapper[4858]: I0127 21:46:41.310644 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sbptw" podUID="399b2fa3-1c0c-480b-9b00-81e367c62c79" containerName="registry-server" containerID="cri-o://ee77fd4d2163adf9cf2140c0834dd3b5262df86f38a19e83457accd42c8d887d" gracePeriod=2
Jan 27 21:46:41 crc kubenswrapper[4858]: I0127 21:46:41.871644 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sbptw"
Jan 27 21:46:41 crc kubenswrapper[4858]: I0127 21:46:41.948183 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/399b2fa3-1c0c-480b-9b00-81e367c62c79-utilities\") pod \"399b2fa3-1c0c-480b-9b00-81e367c62c79\" (UID: \"399b2fa3-1c0c-480b-9b00-81e367c62c79\") "
Jan 27 21:46:41 crc kubenswrapper[4858]: I0127 21:46:41.948244 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnnhb\" (UniqueName: \"kubernetes.io/projected/399b2fa3-1c0c-480b-9b00-81e367c62c79-kube-api-access-fnnhb\") pod \"399b2fa3-1c0c-480b-9b00-81e367c62c79\" (UID: \"399b2fa3-1c0c-480b-9b00-81e367c62c79\") "
Jan 27 21:46:41 crc kubenswrapper[4858]: I0127 21:46:41.948448 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/399b2fa3-1c0c-480b-9b00-81e367c62c79-catalog-content\") pod \"399b2fa3-1c0c-480b-9b00-81e367c62c79\" (UID: \"399b2fa3-1c0c-480b-9b00-81e367c62c79\") "
Jan 27 21:46:41 crc kubenswrapper[4858]: I0127 21:46:41.949103 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/399b2fa3-1c0c-480b-9b00-81e367c62c79-utilities" (OuterVolumeSpecName: "utilities") pod "399b2fa3-1c0c-480b-9b00-81e367c62c79" (UID: "399b2fa3-1c0c-480b-9b00-81e367c62c79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 21:46:41 crc kubenswrapper[4858]: I0127 21:46:41.956736 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/399b2fa3-1c0c-480b-9b00-81e367c62c79-kube-api-access-fnnhb" (OuterVolumeSpecName: "kube-api-access-fnnhb") pod "399b2fa3-1c0c-480b-9b00-81e367c62c79" (UID: "399b2fa3-1c0c-480b-9b00-81e367c62c79"). InnerVolumeSpecName "kube-api-access-fnnhb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.050487 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/399b2fa3-1c0c-480b-9b00-81e367c62c79-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.050516 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnnhb\" (UniqueName: \"kubernetes.io/projected/399b2fa3-1c0c-480b-9b00-81e367c62c79-kube-api-access-fnnhb\") on node \"crc\" DevicePath \"\""
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.082254 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/399b2fa3-1c0c-480b-9b00-81e367c62c79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "399b2fa3-1c0c-480b-9b00-81e367c62c79" (UID: "399b2fa3-1c0c-480b-9b00-81e367c62c79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.152885 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/399b2fa3-1c0c-480b-9b00-81e367c62c79-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.325639 4858 generic.go:334] "Generic (PLEG): container finished" podID="399b2fa3-1c0c-480b-9b00-81e367c62c79" containerID="ee77fd4d2163adf9cf2140c0834dd3b5262df86f38a19e83457accd42c8d887d" exitCode=0
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.325752 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbptw" event={"ID":"399b2fa3-1c0c-480b-9b00-81e367c62c79","Type":"ContainerDied","Data":"ee77fd4d2163adf9cf2140c0834dd3b5262df86f38a19e83457accd42c8d887d"}
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.326108 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbptw" event={"ID":"399b2fa3-1c0c-480b-9b00-81e367c62c79","Type":"ContainerDied","Data":"e60c03d93d23fce80859223cb5f8a138ee8229316ed46fd0dc09782a3cd8509d"}
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.326147 4858 scope.go:117] "RemoveContainer" containerID="ee77fd4d2163adf9cf2140c0834dd3b5262df86f38a19e83457accd42c8d887d"
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.325857 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sbptw"
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.358025 4858 scope.go:117] "RemoveContainer" containerID="e8b0aa0d098322df9b1c1a1f923926340782925af9e280584f4111ad46547edf"
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.373140 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sbptw"]
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.378843 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sbptw"]
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.388015 4858 scope.go:117] "RemoveContainer" containerID="44c61e1d7785fd1392133b3f449b8ff23df94dce3766299701f158c1aa71869a"
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.437923 4858 scope.go:117] "RemoveContainer" containerID="ee77fd4d2163adf9cf2140c0834dd3b5262df86f38a19e83457accd42c8d887d"
Jan 27 21:46:42 crc kubenswrapper[4858]: E0127 21:46:42.438687 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee77fd4d2163adf9cf2140c0834dd3b5262df86f38a19e83457accd42c8d887d\": container with ID starting with ee77fd4d2163adf9cf2140c0834dd3b5262df86f38a19e83457accd42c8d887d not found: ID does not exist" containerID="ee77fd4d2163adf9cf2140c0834dd3b5262df86f38a19e83457accd42c8d887d"
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.438720 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee77fd4d2163adf9cf2140c0834dd3b5262df86f38a19e83457accd42c8d887d"} err="failed to get container status \"ee77fd4d2163adf9cf2140c0834dd3b5262df86f38a19e83457accd42c8d887d\": rpc error: code = NotFound desc = could not find container \"ee77fd4d2163adf9cf2140c0834dd3b5262df86f38a19e83457accd42c8d887d\": container with ID starting with ee77fd4d2163adf9cf2140c0834dd3b5262df86f38a19e83457accd42c8d887d not found: ID does not exist"
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.438741 4858 scope.go:117] "RemoveContainer" containerID="e8b0aa0d098322df9b1c1a1f923926340782925af9e280584f4111ad46547edf"
Jan 27 21:46:42 crc kubenswrapper[4858]: E0127 21:46:42.440309 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8b0aa0d098322df9b1c1a1f923926340782925af9e280584f4111ad46547edf\": container with ID starting with e8b0aa0d098322df9b1c1a1f923926340782925af9e280584f4111ad46547edf not found: ID does not exist" containerID="e8b0aa0d098322df9b1c1a1f923926340782925af9e280584f4111ad46547edf"
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.440336 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8b0aa0d098322df9b1c1a1f923926340782925af9e280584f4111ad46547edf"} err="failed to get container status \"e8b0aa0d098322df9b1c1a1f923926340782925af9e280584f4111ad46547edf\": rpc error: code = NotFound desc = could not find container \"e8b0aa0d098322df9b1c1a1f923926340782925af9e280584f4111ad46547edf\": container with ID starting with e8b0aa0d098322df9b1c1a1f923926340782925af9e280584f4111ad46547edf not found: ID does not exist"
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.440352 4858 scope.go:117] "RemoveContainer" containerID="44c61e1d7785fd1392133b3f449b8ff23df94dce3766299701f158c1aa71869a"
Jan 27 21:46:42 crc kubenswrapper[4858]: E0127 21:46:42.440961 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44c61e1d7785fd1392133b3f449b8ff23df94dce3766299701f158c1aa71869a\": container with ID starting with 44c61e1d7785fd1392133b3f449b8ff23df94dce3766299701f158c1aa71869a not found: ID does not exist" containerID="44c61e1d7785fd1392133b3f449b8ff23df94dce3766299701f158c1aa71869a"
Jan 27 21:46:42 crc kubenswrapper[4858]: I0127 21:46:42.441006 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44c61e1d7785fd1392133b3f449b8ff23df94dce3766299701f158c1aa71869a"} err="failed to get container status \"44c61e1d7785fd1392133b3f449b8ff23df94dce3766299701f158c1aa71869a\": rpc error: code = NotFound desc = could not find container \"44c61e1d7785fd1392133b3f449b8ff23df94dce3766299701f158c1aa71869a\": container with ID starting with 44c61e1d7785fd1392133b3f449b8ff23df94dce3766299701f158c1aa71869a not found: ID does not exist"
Jan 27 21:46:44 crc kubenswrapper[4858]: I0127 21:46:44.097842 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="399b2fa3-1c0c-480b-9b00-81e367c62c79" path="/var/lib/kubelet/pods/399b2fa3-1c0c-480b-9b00-81e367c62c79/volumes"
Jan 27 21:46:53 crc kubenswrapper[4858]: I0127 21:46:53.654848 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw_86bd2beb-9d03-402c-bb7a-0ee191fa9f8d/util/0.log"
Jan 27 21:46:53 crc kubenswrapper[4858]: I0127 21:46:53.808776 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw_86bd2beb-9d03-402c-bb7a-0ee191fa9f8d/util/0.log"
Jan 27 21:46:53 crc kubenswrapper[4858]: I0127 21:46:53.831693 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw_86bd2beb-9d03-402c-bb7a-0ee191fa9f8d/pull/0.log"
Jan 27 21:46:53 crc kubenswrapper[4858]: I0127 21:46:53.836687 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw_86bd2beb-9d03-402c-bb7a-0ee191fa9f8d/pull/0.log"
Jan 27 21:46:54 crc kubenswrapper[4858]: I0127 21:46:54.017423 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw_86bd2beb-9d03-402c-bb7a-0ee191fa9f8d/util/0.log"
Jan 27 21:46:54 crc kubenswrapper[4858]: I0127 21:46:54.045537 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw_86bd2beb-9d03-402c-bb7a-0ee191fa9f8d/pull/0.log"
Jan 27 21:46:54 crc kubenswrapper[4858]: I0127 21:46:54.046485 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw_86bd2beb-9d03-402c-bb7a-0ee191fa9f8d/extract/0.log"
Jan 27 21:46:54 crc kubenswrapper[4858]: I0127 21:46:54.073029 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5"
Jan 27 21:46:54 crc kubenswrapper[4858]: E0127 21:46:54.073254 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
Jan 27 21:46:54 crc kubenswrapper[4858]: I0127 21:46:54.263187 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2_40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5/util/0.log"
Jan 27 21:46:54 crc kubenswrapper[4858]: I0127 21:46:54.434192 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2_40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5/pull/0.log"
Jan 27 21:46:54 crc kubenswrapper[4858]: I0127 21:46:54.451507 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2_40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5/util/0.log"
Jan 27 21:46:54 crc kubenswrapper[4858]: I0127 21:46:54.469817 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2_40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5/pull/0.log"
Jan 27 21:46:54 crc kubenswrapper[4858]: I0127 21:46:54.660776 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2_40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5/pull/0.log"
Jan 27 21:46:54 crc kubenswrapper[4858]: I0127 21:46:54.663785 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2_40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5/util/0.log"
Jan 27 21:46:54 crc kubenswrapper[4858]: I0127 21:46:54.705186 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2_40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5/extract/0.log"
Jan 27 21:46:54 crc kubenswrapper[4858]: I0127 21:46:54.817274 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt_2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627/util/0.log"
Jan 27 21:46:55 crc kubenswrapper[4858]: I0127 21:46:55.022836 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt_2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627/pull/0.log"
Jan 27 21:46:55 crc kubenswrapper[4858]: I0127 21:46:55.041126 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt_2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627/util/0.log"
Jan 27 21:46:55 crc kubenswrapper[4858]: I0127 21:46:55.047534 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt_2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627/pull/0.log"
Jan 27 21:46:55 crc kubenswrapper[4858]: I0127 21:46:55.210684 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt_2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627/util/0.log"
Jan 27 21:46:55 crc kubenswrapper[4858]: I0127 21:46:55.217409 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt_2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627/extract/0.log"
Jan 27 21:46:55 crc kubenswrapper[4858]: I0127 21:46:55.219454 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt_2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627/pull/0.log"
Jan 27 21:46:55 crc kubenswrapper[4858]: I0127 21:46:55.396212 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6x7b7_72c982d1-53e2-49e0-88ee-e6807485e9dc/extract-utilities/0.log"
Jan 27 21:46:55 crc kubenswrapper[4858]: I0127 21:46:55.561723 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6x7b7_72c982d1-53e2-49e0-88ee-e6807485e9dc/extract-utilities/0.log"
Jan 27 21:46:55 crc kubenswrapper[4858]: I0127 21:46:55.590792 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6x7b7_72c982d1-53e2-49e0-88ee-e6807485e9dc/extract-content/0.log"
Jan 27 21:46:55 crc kubenswrapper[4858]: I0127 21:46:55.596857 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6x7b7_72c982d1-53e2-49e0-88ee-e6807485e9dc/extract-content/0.log"
Jan 27 21:46:55 crc kubenswrapper[4858]: I0127 21:46:55.808036 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6x7b7_72c982d1-53e2-49e0-88ee-e6807485e9dc/extract-utilities/0.log"
Jan 27 21:46:55 crc kubenswrapper[4858]: I0127 21:46:55.810302 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6x7b7_72c982d1-53e2-49e0-88ee-e6807485e9dc/extract-content/0.log"
Jan 27 21:46:56 crc kubenswrapper[4858]: I0127 21:46:56.039460 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgbcp_43f8439c-4c71-4ed1-b4db-462915af0785/extract-utilities/0.log"
Jan 27 21:46:56 crc kubenswrapper[4858]: I0127 21:46:56.251464 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgbcp_43f8439c-4c71-4ed1-b4db-462915af0785/extract-content/0.log"
Jan 27 21:46:56 crc kubenswrapper[4858]: I0127 21:46:56.265343 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgbcp_43f8439c-4c71-4ed1-b4db-462915af0785/extract-utilities/0.log"
Jan 27 21:46:56 crc kubenswrapper[4858]: I0127 21:46:56.331590 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgbcp_43f8439c-4c71-4ed1-b4db-462915af0785/extract-content/0.log"
Jan 27 21:46:56 crc kubenswrapper[4858]: I0127 21:46:56.482697 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgbcp_43f8439c-4c71-4ed1-b4db-462915af0785/extract-utilities/0.log"
Jan 27 21:46:56 crc kubenswrapper[4858]: I0127 21:46:56.504195 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6x7b7_72c982d1-53e2-49e0-88ee-e6807485e9dc/registry-server/0.log"
Jan 27 21:46:56 crc kubenswrapper[4858]: I0127 21:46:56.548730 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgbcp_43f8439c-4c71-4ed1-b4db-462915af0785/extract-content/0.log"
Jan 27 21:46:56 crc kubenswrapper[4858]: I0127 21:46:56.813389 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-qlw92_35d290ca-2486-41c6-9a0e-0b905e2994bb/marketplace-operator/0.log"
Jan 27 21:46:57 crc kubenswrapper[4858]: I0127 21:46:57.059435 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6md42_e59c3191-f721-45cc-b1d2-2e6bd8bc0797/extract-utilities/0.log"
Jan 27 21:46:57 crc kubenswrapper[4858]: I0127 21:46:57.284343 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6md42_e59c3191-f721-45cc-b1d2-2e6bd8bc0797/extract-content/0.log"
Jan 27 21:46:57 crc kubenswrapper[4858]: I0127 21:46:57.335980 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6md42_e59c3191-f721-45cc-b1d2-2e6bd8bc0797/extract-content/0.log"
Jan 27 21:46:57 crc kubenswrapper[4858]: I0127 21:46:57.347809 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6md42_e59c3191-f721-45cc-b1d2-2e6bd8bc0797/extract-utilities/0.log"
Jan 27 21:46:57 crc kubenswrapper[4858]: I0127 21:46:57.436309 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgbcp_43f8439c-4c71-4ed1-b4db-462915af0785/registry-server/0.log"
Jan 27 21:46:57 crc kubenswrapper[4858]: I0127 21:46:57.499818 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6md42_e59c3191-f721-45cc-b1d2-2e6bd8bc0797/extract-content/0.log"
Jan 27 21:46:57 crc kubenswrapper[4858]: I0127 21:46:57.518278 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6md42_e59c3191-f721-45cc-b1d2-2e6bd8bc0797/extract-utilities/0.log"
Jan 27 21:46:57 crc kubenswrapper[4858]: I0127 21:46:57.672136 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jfrzg_b7c34e1e-7f03-4b35-8a78-38432c88a885/extract-utilities/0.log"
Jan 27 21:46:57 crc kubenswrapper[4858]: I0127 21:46:57.824051 4858 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-marketplace_redhat-marketplace-6md42_e59c3191-f721-45cc-b1d2-2e6bd8bc0797/registry-server/0.log" Jan 27 21:46:57 crc kubenswrapper[4858]: I0127 21:46:57.872217 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jfrzg_b7c34e1e-7f03-4b35-8a78-38432c88a885/extract-utilities/0.log" Jan 27 21:46:57 crc kubenswrapper[4858]: I0127 21:46:57.889411 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jfrzg_b7c34e1e-7f03-4b35-8a78-38432c88a885/extract-content/0.log" Jan 27 21:46:57 crc kubenswrapper[4858]: I0127 21:46:57.928987 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jfrzg_b7c34e1e-7f03-4b35-8a78-38432c88a885/extract-content/0.log" Jan 27 21:46:58 crc kubenswrapper[4858]: I0127 21:46:58.101229 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jfrzg_b7c34e1e-7f03-4b35-8a78-38432c88a885/extract-utilities/0.log" Jan 27 21:46:58 crc kubenswrapper[4858]: I0127 21:46:58.128900 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jfrzg_b7c34e1e-7f03-4b35-8a78-38432c88a885/extract-content/0.log" Jan 27 21:46:58 crc kubenswrapper[4858]: I0127 21:46:58.858572 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jfrzg_b7c34e1e-7f03-4b35-8a78-38432c88a885/registry-server/0.log" Jan 27 21:47:09 crc kubenswrapper[4858]: I0127 21:47:09.072059 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" Jan 27 21:47:09 crc kubenswrapper[4858]: E0127 21:47:09.073120 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:47:11 crc kubenswrapper[4858]: I0127 21:47:11.418758 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-bdznk_35e8e577-768b-425e-ae5e-74f9f4710566/prometheus-operator/0.log" Jan 27 21:47:11 crc kubenswrapper[4858]: I0127 21:47:11.458531 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57c849b6b8-vk825_812a6b90-9a07-4f7f-864d-baa13b5ab210/prometheus-operator-admission-webhook/0.log" Jan 27 21:47:11 crc kubenswrapper[4858]: I0127 21:47:11.495449 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh_c4c617c2-8b14-4e9c-8a40-ab1353beeb33/prometheus-operator-admission-webhook/0.log" Jan 27 21:47:11 crc kubenswrapper[4858]: I0127 21:47:11.613914 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-dj5bj_40809707-fd14-4599-a0ac-0bcb0c90661d/operator/0.log" Jan 27 21:47:11 crc kubenswrapper[4858]: I0127 21:47:11.659233 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-nfc2q_3c0cbb64-d018-496a-a983-8c4761f142ed/perses-operator/0.log" Jan 27 21:47:24 crc kubenswrapper[4858]: I0127 
21:47:24.071887 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" Jan 27 21:47:24 crc kubenswrapper[4858]: E0127 21:47:24.072594 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:47:38 crc kubenswrapper[4858]: I0127 21:47:38.071705 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" Jan 27 21:47:38 crc kubenswrapper[4858]: I0127 21:47:38.887627 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"329130a47c7e1d937f1974012a1991feff790eda04f6fd53757ddd5c0aec43b1"} Jan 27 21:48:02 crc kubenswrapper[4858]: I0127 21:48:02.112805 4858 scope.go:117] "RemoveContainer" containerID="c6135955f664cb20028ebf64edb56188b278fe5640c39055619f050ab7a242db" Jan 27 21:48:02 crc kubenswrapper[4858]: I0127 21:48:02.174390 4858 scope.go:117] "RemoveContainer" containerID="08f0181cd6c56467906e2a792071c15874a040a9b572cad34e711d66f1cb4eb0" Jan 27 21:48:02 crc kubenswrapper[4858]: I0127 21:48:02.205889 4858 scope.go:117] "RemoveContainer" containerID="b452035effb2480ba78e76a4bbb9b78f6e1f13ff7b498b7c6437c14c9c85703b" Jan 27 21:49:02 crc kubenswrapper[4858]: I0127 21:49:02.274409 4858 scope.go:117] "RemoveContainer" containerID="9ac83be933e902d43b6b9dbb1e5baad37ed239af1ed08d81f8bb9e107e35bd92" Jan 27 21:49:18 crc kubenswrapper[4858]: I0127 21:49:18.041589 4858 generic.go:334] "Generic (PLEG): container finished" podID="c5d56ed1-a59f-43b9-a3fa-0148df4f8909" containerID="aa9e6c8c266908d2fbf174e33046f52791a64f8d8a14cddc5c2bff110e794d76" exitCode=0 Jan 27 21:49:18 crc kubenswrapper[4858]: I0127 21:49:18.041739 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jkw9t/must-gather-l29cg" event={"ID":"c5d56ed1-a59f-43b9-a3fa-0148df4f8909","Type":"ContainerDied","Data":"aa9e6c8c266908d2fbf174e33046f52791a64f8d8a14cddc5c2bff110e794d76"} Jan 27 21:49:18 crc kubenswrapper[4858]: I0127 21:49:18.043342 4858 scope.go:117] "RemoveContainer" containerID="aa9e6c8c266908d2fbf174e33046f52791a64f8d8a14cddc5c2bff110e794d76" Jan 27 21:49:18 crc kubenswrapper[4858]: I0127 21:49:18.413303 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jkw9t_must-gather-l29cg_c5d56ed1-a59f-43b9-a3fa-0148df4f8909/gather/0.log" Jan 27 21:49:27 crc kubenswrapper[4858]: I0127 21:49:27.672354 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jkw9t/must-gather-l29cg"] Jan 27 21:49:27 crc kubenswrapper[4858]: I0127 21:49:27.673202 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jkw9t/must-gather-l29cg" podUID="c5d56ed1-a59f-43b9-a3fa-0148df4f8909" containerName="copy" containerID="cri-o://c6ed15db398050e0fe2525192ad08faa33ffe9b666c4b779ac6ecdff28ffb9cc" gracePeriod=2 Jan 27 21:49:27 crc kubenswrapper[4858]: I0127 21:49:27.681691 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jkw9t/must-gather-l29cg"] Jan 27 21:49:28 
crc kubenswrapper[4858]: I0127 21:49:28.146252 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jkw9t_must-gather-l29cg_c5d56ed1-a59f-43b9-a3fa-0148df4f8909/copy/0.log" Jan 27 21:49:28 crc kubenswrapper[4858]: I0127 21:49:28.147141 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jkw9t/must-gather-l29cg" Jan 27 21:49:28 crc kubenswrapper[4858]: I0127 21:49:28.178266 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jkw9t_must-gather-l29cg_c5d56ed1-a59f-43b9-a3fa-0148df4f8909/copy/0.log" Jan 27 21:49:28 crc kubenswrapper[4858]: I0127 21:49:28.178584 4858 generic.go:334] "Generic (PLEG): container finished" podID="c5d56ed1-a59f-43b9-a3fa-0148df4f8909" containerID="c6ed15db398050e0fe2525192ad08faa33ffe9b666c4b779ac6ecdff28ffb9cc" exitCode=143 Jan 27 21:49:28 crc kubenswrapper[4858]: I0127 21:49:28.178633 4858 scope.go:117] "RemoveContainer" containerID="c6ed15db398050e0fe2525192ad08faa33ffe9b666c4b779ac6ecdff28ffb9cc" Jan 27 21:49:28 crc kubenswrapper[4858]: I0127 21:49:28.178740 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jkw9t/must-gather-l29cg" Jan 27 21:49:28 crc kubenswrapper[4858]: I0127 21:49:28.197738 4858 scope.go:117] "RemoveContainer" containerID="aa9e6c8c266908d2fbf174e33046f52791a64f8d8a14cddc5c2bff110e794d76" Jan 27 21:49:28 crc kubenswrapper[4858]: I0127 21:49:28.212169 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c5d56ed1-a59f-43b9-a3fa-0148df4f8909-must-gather-output\") pod \"c5d56ed1-a59f-43b9-a3fa-0148df4f8909\" (UID: \"c5d56ed1-a59f-43b9-a3fa-0148df4f8909\") " Jan 27 21:49:28 crc kubenswrapper[4858]: I0127 21:49:28.212421 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwzqs\" (UniqueName: \"kubernetes.io/projected/c5d56ed1-a59f-43b9-a3fa-0148df4f8909-kube-api-access-vwzqs\") pod \"c5d56ed1-a59f-43b9-a3fa-0148df4f8909\" (UID: \"c5d56ed1-a59f-43b9-a3fa-0148df4f8909\") " Jan 27 21:49:28 crc kubenswrapper[4858]: I0127 21:49:28.219632 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d56ed1-a59f-43b9-a3fa-0148df4f8909-kube-api-access-vwzqs" (OuterVolumeSpecName: "kube-api-access-vwzqs") pod "c5d56ed1-a59f-43b9-a3fa-0148df4f8909" (UID: "c5d56ed1-a59f-43b9-a3fa-0148df4f8909"). InnerVolumeSpecName "kube-api-access-vwzqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:49:28 crc kubenswrapper[4858]: I0127 21:49:28.280835 4858 scope.go:117] "RemoveContainer" containerID="c6ed15db398050e0fe2525192ad08faa33ffe9b666c4b779ac6ecdff28ffb9cc" Jan 27 21:49:28 crc kubenswrapper[4858]: E0127 21:49:28.281306 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6ed15db398050e0fe2525192ad08faa33ffe9b666c4b779ac6ecdff28ffb9cc\": container with ID starting with c6ed15db398050e0fe2525192ad08faa33ffe9b666c4b779ac6ecdff28ffb9cc not found: ID does not exist" containerID="c6ed15db398050e0fe2525192ad08faa33ffe9b666c4b779ac6ecdff28ffb9cc" Jan 27 21:49:28 crc kubenswrapper[4858]: I0127 21:49:28.281340 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6ed15db398050e0fe2525192ad08faa33ffe9b666c4b779ac6ecdff28ffb9cc"} err="failed to get container status \"c6ed15db398050e0fe2525192ad08faa33ffe9b666c4b779ac6ecdff28ffb9cc\": rpc error: code = NotFound desc = could not find container \"c6ed15db398050e0fe2525192ad08faa33ffe9b666c4b779ac6ecdff28ffb9cc\": container with ID starting with c6ed15db398050e0fe2525192ad08faa33ffe9b666c4b779ac6ecdff28ffb9cc not found: ID does not exist" Jan 27 21:49:28 crc kubenswrapper[4858]: I0127 21:49:28.281359 4858 scope.go:117] "RemoveContainer" containerID="aa9e6c8c266908d2fbf174e33046f52791a64f8d8a14cddc5c2bff110e794d76" Jan 27 21:49:28 crc kubenswrapper[4858]: E0127 21:49:28.281860 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa9e6c8c266908d2fbf174e33046f52791a64f8d8a14cddc5c2bff110e794d76\": container with ID starting with aa9e6c8c266908d2fbf174e33046f52791a64f8d8a14cddc5c2bff110e794d76 not found: ID does not exist" containerID="aa9e6c8c266908d2fbf174e33046f52791a64f8d8a14cddc5c2bff110e794d76" Jan 27 21:49:28 crc kubenswrapper[4858]: I0127 21:49:28.281899 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa9e6c8c266908d2fbf174e33046f52791a64f8d8a14cddc5c2bff110e794d76"} err="failed to get container status \"aa9e6c8c266908d2fbf174e33046f52791a64f8d8a14cddc5c2bff110e794d76\": rpc error: code = NotFound desc = could not find container \"aa9e6c8c266908d2fbf174e33046f52791a64f8d8a14cddc5c2bff110e794d76\": container with ID starting with aa9e6c8c266908d2fbf174e33046f52791a64f8d8a14cddc5c2bff110e794d76 not found: ID does not exist" Jan 27 21:49:28 crc kubenswrapper[4858]: I0127 21:49:28.315917 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwzqs\" (UniqueName: \"kubernetes.io/projected/c5d56ed1-a59f-43b9-a3fa-0148df4f8909-kube-api-access-vwzqs\") on node \"crc\" DevicePath \"\"" Jan 27 21:49:28 crc kubenswrapper[4858]: I0127 21:49:28.433670 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5d56ed1-a59f-43b9-a3fa-0148df4f8909-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c5d56ed1-a59f-43b9-a3fa-0148df4f8909" (UID: "c5d56ed1-a59f-43b9-a3fa-0148df4f8909"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:49:28 crc kubenswrapper[4858]: I0127 21:49:28.520267 4858 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c5d56ed1-a59f-43b9-a3fa-0148df4f8909-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 21:49:30 crc kubenswrapper[4858]: I0127 21:49:30.084665 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5d56ed1-a59f-43b9-a3fa-0148df4f8909" path="/var/lib/kubelet/pods/c5d56ed1-a59f-43b9-a3fa-0148df4f8909/volumes" Jan 27 21:49:59 crc kubenswrapper[4858]: I0127 21:49:59.329011 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:49:59 crc kubenswrapper[4858]: I0127 21:49:59.329590 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:50:02 crc kubenswrapper[4858]: I0127 21:50:02.349764 4858 scope.go:117] "RemoveContainer" containerID="4420a1f1419bbe490bc365667eb59d3e64e4d1dfb583748b1462caa36155d6d6" Jan 27 21:50:29 crc kubenswrapper[4858]: I0127 21:50:29.328736 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:50:29 crc kubenswrapper[4858]: I0127 21:50:29.329335 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:50:59 crc kubenswrapper[4858]: I0127 21:50:59.329353 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:50:59 crc kubenswrapper[4858]: I0127 21:50:59.329906 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:50:59 crc kubenswrapper[4858]: I0127 21:50:59.329956 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 21:50:59 crc kubenswrapper[4858]: I0127 21:50:59.330589 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"329130a47c7e1d937f1974012a1991feff790eda04f6fd53757ddd5c0aec43b1"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 21:50:59 crc kubenswrapper[4858]: I0127 21:50:59.330665 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://329130a47c7e1d937f1974012a1991feff790eda04f6fd53757ddd5c0aec43b1" gracePeriod=600 Jan 27 21:51:00 crc kubenswrapper[4858]: I0127 21:51:00.089274 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="329130a47c7e1d937f1974012a1991feff790eda04f6fd53757ddd5c0aec43b1" exitCode=0 Jan 27 21:51:00 crc kubenswrapper[4858]: I0127 21:51:00.089322 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"329130a47c7e1d937f1974012a1991feff790eda04f6fd53757ddd5c0aec43b1"} Jan 27 21:51:00 crc kubenswrapper[4858]: I0127 21:51:00.090313 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5"} Jan 27 21:51:00 crc kubenswrapper[4858]: I0127 21:51:00.090378 4858 scope.go:117] "RemoveContainer" containerID="3ef14a406ea983a4002068e06f82ecd6390f94f6ba4073b072f146884bd28bd5" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.237665 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qgwxv"] Jan 27 21:51:08 crc kubenswrapper[4858]: E0127 21:51:08.238759 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="399b2fa3-1c0c-480b-9b00-81e367c62c79" containerName="registry-server" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.238775 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="399b2fa3-1c0c-480b-9b00-81e367c62c79" containerName="registry-server" Jan 27 21:51:08 crc kubenswrapper[4858]: E0127 21:51:08.238805 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="399b2fa3-1c0c-480b-9b00-81e367c62c79" containerName="extract-content" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.238813 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="399b2fa3-1c0c-480b-9b00-81e367c62c79" containerName="extract-content" Jan 27 21:51:08 crc kubenswrapper[4858]: E0127 21:51:08.238832 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d56ed1-a59f-43b9-a3fa-0148df4f8909" containerName="copy" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.238840 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d56ed1-a59f-43b9-a3fa-0148df4f8909" containerName="copy" Jan 27 21:51:08 crc kubenswrapper[4858]: E0127 21:51:08.238865 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="399b2fa3-1c0c-480b-9b00-81e367c62c79" containerName="extract-utilities" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.238874 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="399b2fa3-1c0c-480b-9b00-81e367c62c79" containerName="extract-utilities" Jan 27 21:51:08 crc kubenswrapper[4858]: E0127 21:51:08.238893 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d56ed1-a59f-43b9-a3fa-0148df4f8909" containerName="gather" Jan 27 21:51:08 crc 
kubenswrapper[4858]: I0127 21:51:08.238901 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d56ed1-a59f-43b9-a3fa-0148df4f8909" containerName="gather" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.239164 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="399b2fa3-1c0c-480b-9b00-81e367c62c79" containerName="registry-server" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.239192 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5d56ed1-a59f-43b9-a3fa-0148df4f8909" containerName="copy" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.239208 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5d56ed1-a59f-43b9-a3fa-0148df4f8909" containerName="gather" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.241108 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qgwxv" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.252669 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qgwxv"] Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.394152 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-catalog-content\") pod \"certified-operators-qgwxv\" (UID: \"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3\") " pod="openshift-marketplace/certified-operators-qgwxv" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.394460 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-utilities\") pod \"certified-operators-qgwxv\" (UID: \"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3\") " pod="openshift-marketplace/certified-operators-qgwxv" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.394602 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpkzh\" (UniqueName: \"kubernetes.io/projected/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-kube-api-access-dpkzh\") pod \"certified-operators-qgwxv\" (UID: \"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3\") " pod="openshift-marketplace/certified-operators-qgwxv" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.496812 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-utilities\") pod \"certified-operators-qgwxv\" (UID: \"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3\") " pod="openshift-marketplace/certified-operators-qgwxv" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.497305 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-utilities\") pod \"certified-operators-qgwxv\" (UID: \"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3\") " pod="openshift-marketplace/certified-operators-qgwxv" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.497352 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpkzh\" (UniqueName: \"kubernetes.io/projected/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-kube-api-access-dpkzh\") pod \"certified-operators-qgwxv\" (UID: \"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3\") " pod="openshift-marketplace/certified-operators-qgwxv" Jan 27 21:51:08 
crc kubenswrapper[4858]: I0127 21:51:08.497492 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-catalog-content\") pod \"certified-operators-qgwxv\" (UID: \"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3\") " pod="openshift-marketplace/certified-operators-qgwxv" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.497843 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-catalog-content\") pod \"certified-operators-qgwxv\" (UID: \"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3\") " pod="openshift-marketplace/certified-operators-qgwxv" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.526808 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpkzh\" (UniqueName: \"kubernetes.io/projected/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-kube-api-access-dpkzh\") pod \"certified-operators-qgwxv\" (UID: \"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3\") " pod="openshift-marketplace/certified-operators-qgwxv" Jan 27 21:51:08 crc kubenswrapper[4858]: I0127 21:51:08.579066 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qgwxv" Jan 27 21:51:09 crc kubenswrapper[4858]: I0127 21:51:09.131777 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qgwxv"] Jan 27 21:51:09 crc kubenswrapper[4858]: W0127 21:51:09.140286 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9b30d0b_8d2c_4fe2_b6ba_282a2cd286d3.slice/crio-b0795bc6f3f008df778e8c59e84f7f90fd96688843ebf03ae024b8dd0947812c WatchSource:0}: Error finding container b0795bc6f3f008df778e8c59e84f7f90fd96688843ebf03ae024b8dd0947812c: Status 404 returned error can't find the container with id b0795bc6f3f008df778e8c59e84f7f90fd96688843ebf03ae024b8dd0947812c Jan 27 21:51:09 crc kubenswrapper[4858]: I0127 21:51:09.200880 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgwxv" event={"ID":"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3","Type":"ContainerStarted","Data":"b0795bc6f3f008df778e8c59e84f7f90fd96688843ebf03ae024b8dd0947812c"} Jan 27 21:51:10 crc kubenswrapper[4858]: I0127 21:51:10.230189 4858 generic.go:334] "Generic (PLEG): container finished" podID="e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3" containerID="0292c94cb9a686726aeed9ff3acc73fcc45166e9c4329047c3f088eec68de759" exitCode=0 Jan 27 21:51:10 crc kubenswrapper[4858]: I0127 21:51:10.230426 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgwxv" event={"ID":"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3","Type":"ContainerDied","Data":"0292c94cb9a686726aeed9ff3acc73fcc45166e9c4329047c3f088eec68de759"} Jan 27 21:51:12 crc kubenswrapper[4858]: I0127 21:51:12.258912 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgwxv" event={"ID":"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3","Type":"ContainerStarted","Data":"1f4aac358c73d1b6dae6f0aee8c4cd86cdd481051bb67e4808610e7f764c3ad2"} Jan 27 21:51:13 crc kubenswrapper[4858]: I0127 21:51:13.270684 4858 generic.go:334] "Generic (PLEG): container finished" podID="e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3" containerID="1f4aac358c73d1b6dae6f0aee8c4cd86cdd481051bb67e4808610e7f764c3ad2" 
exitCode=0 Jan 27 21:51:13 crc kubenswrapper[4858]: I0127 21:51:13.270784 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgwxv" event={"ID":"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3","Type":"ContainerDied","Data":"1f4aac358c73d1b6dae6f0aee8c4cd86cdd481051bb67e4808610e7f764c3ad2"} Jan 27 21:51:14 crc kubenswrapper[4858]: I0127 21:51:14.282157 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgwxv" event={"ID":"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3","Type":"ContainerStarted","Data":"1ce9c273ceb5d6e9f6b2ce156cecfab3e4f2bd73d5f066f2f55b1f63300f1578"} Jan 27 21:51:14 crc kubenswrapper[4858]: I0127 21:51:14.312803 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qgwxv" podStartSLOduration=2.848990091 podStartE2EDuration="6.312775524s" podCreationTimestamp="2026-01-27 21:51:08 +0000 UTC" firstStartedPulling="2026-01-27 21:51:10.235180394 +0000 UTC m=+6214.942996130" lastFinishedPulling="2026-01-27 21:51:13.698965847 +0000 UTC m=+6218.406781563" observedRunningTime="2026-01-27 21:51:14.300254618 +0000 UTC m=+6219.008070324" watchObservedRunningTime="2026-01-27 21:51:14.312775524 +0000 UTC m=+6219.020591230" Jan 27 21:51:18 crc kubenswrapper[4858]: I0127 21:51:18.579262 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qgwxv" Jan 27 21:51:18 crc kubenswrapper[4858]: I0127 21:51:18.580757 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qgwxv" Jan 27 21:51:18 crc kubenswrapper[4858]: I0127 21:51:18.634501 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qgwxv" Jan 27 21:51:19 crc kubenswrapper[4858]: I0127 21:51:19.438719 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qgwxv" Jan 27 21:51:19 crc kubenswrapper[4858]: I0127 21:51:19.484963 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qgwxv"] Jan 27 21:51:21 crc kubenswrapper[4858]: I0127 21:51:21.384028 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qgwxv" podUID="e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3" containerName="registry-server" containerID="cri-o://1ce9c273ceb5d6e9f6b2ce156cecfab3e4f2bd73d5f066f2f55b1f63300f1578" gracePeriod=2 Jan 27 21:51:21 crc kubenswrapper[4858]: I0127 21:51:21.985934 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qgwxv" Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.136918 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-catalog-content\") pod \"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3\" (UID: \"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3\") " Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.137021 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-utilities\") pod \"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3\" (UID: \"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3\") " Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.137099 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpkzh\" (UniqueName: \"kubernetes.io/projected/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-kube-api-access-dpkzh\") pod \"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3\" (UID: \"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3\") " Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.138002 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-utilities" (OuterVolumeSpecName: "utilities") pod "e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3" (UID: "e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.146371 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-kube-api-access-dpkzh" (OuterVolumeSpecName: "kube-api-access-dpkzh") pod "e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3" (UID: "e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3"). InnerVolumeSpecName "kube-api-access-dpkzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.239596 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.239628 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpkzh\" (UniqueName: \"kubernetes.io/projected/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-kube-api-access-dpkzh\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.397813 4858 generic.go:334] "Generic (PLEG): container finished" podID="e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3" containerID="1ce9c273ceb5d6e9f6b2ce156cecfab3e4f2bd73d5f066f2f55b1f63300f1578" exitCode=0 Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.397861 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgwxv" event={"ID":"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3","Type":"ContainerDied","Data":"1ce9c273ceb5d6e9f6b2ce156cecfab3e4f2bd73d5f066f2f55b1f63300f1578"} Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.397893 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qgwxv" event={"ID":"e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3","Type":"ContainerDied","Data":"b0795bc6f3f008df778e8c59e84f7f90fd96688843ebf03ae024b8dd0947812c"} Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.397896 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qgwxv" Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.397916 4858 scope.go:117] "RemoveContainer" containerID="1ce9c273ceb5d6e9f6b2ce156cecfab3e4f2bd73d5f066f2f55b1f63300f1578" Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.435970 4858 scope.go:117] "RemoveContainer" containerID="1f4aac358c73d1b6dae6f0aee8c4cd86cdd481051bb67e4808610e7f764c3ad2" Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.450915 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3" (UID: "e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.463630 4858 scope.go:117] "RemoveContainer" containerID="0292c94cb9a686726aeed9ff3acc73fcc45166e9c4329047c3f088eec68de759" Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.535630 4858 scope.go:117] "RemoveContainer" containerID="1ce9c273ceb5d6e9f6b2ce156cecfab3e4f2bd73d5f066f2f55b1f63300f1578" Jan 27 21:51:22 crc kubenswrapper[4858]: E0127 21:51:22.536267 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ce9c273ceb5d6e9f6b2ce156cecfab3e4f2bd73d5f066f2f55b1f63300f1578\": container with ID starting with 1ce9c273ceb5d6e9f6b2ce156cecfab3e4f2bd73d5f066f2f55b1f63300f1578 not found: ID does not exist" containerID="1ce9c273ceb5d6e9f6b2ce156cecfab3e4f2bd73d5f066f2f55b1f63300f1578" Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.536323 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce9c273ceb5d6e9f6b2ce156cecfab3e4f2bd73d5f066f2f55b1f63300f1578"} err="failed to get container status \"1ce9c273ceb5d6e9f6b2ce156cecfab3e4f2bd73d5f066f2f55b1f63300f1578\": rpc error: code = NotFound desc = could not find container \"1ce9c273ceb5d6e9f6b2ce156cecfab3e4f2bd73d5f066f2f55b1f63300f1578\": container with ID starting with 1ce9c273ceb5d6e9f6b2ce156cecfab3e4f2bd73d5f066f2f55b1f63300f1578 not found: ID does not exist" Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.536356 4858 scope.go:117] "RemoveContainer" containerID="1f4aac358c73d1b6dae6f0aee8c4cd86cdd481051bb67e4808610e7f764c3ad2" Jan 27 21:51:22 crc kubenswrapper[4858]: E0127 21:51:22.537100 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f4aac358c73d1b6dae6f0aee8c4cd86cdd481051bb67e4808610e7f764c3ad2\": container with ID starting with 1f4aac358c73d1b6dae6f0aee8c4cd86cdd481051bb67e4808610e7f764c3ad2 not found: ID does not exist" containerID="1f4aac358c73d1b6dae6f0aee8c4cd86cdd481051bb67e4808610e7f764c3ad2" Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.537174 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f4aac358c73d1b6dae6f0aee8c4cd86cdd481051bb67e4808610e7f764c3ad2"} err="failed to get container status \"1f4aac358c73d1b6dae6f0aee8c4cd86cdd481051bb67e4808610e7f764c3ad2\": rpc error: code = NotFound desc = could not find container \"1f4aac358c73d1b6dae6f0aee8c4cd86cdd481051bb67e4808610e7f764c3ad2\": container with ID starting with 1f4aac358c73d1b6dae6f0aee8c4cd86cdd481051bb67e4808610e7f764c3ad2 not found: ID does not exist" Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.537219 4858 scope.go:117] "RemoveContainer" containerID="0292c94cb9a686726aeed9ff3acc73fcc45166e9c4329047c3f088eec68de759" Jan 27 21:51:22 crc kubenswrapper[4858]: E0127 21:51:22.537645 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0292c94cb9a686726aeed9ff3acc73fcc45166e9c4329047c3f088eec68de759\": container with ID starting with 0292c94cb9a686726aeed9ff3acc73fcc45166e9c4329047c3f088eec68de759 not found: ID does not exist" containerID="0292c94cb9a686726aeed9ff3acc73fcc45166e9c4329047c3f088eec68de759" Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.537748 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0292c94cb9a686726aeed9ff3acc73fcc45166e9c4329047c3f088eec68de759"} err="failed to get container status \"0292c94cb9a686726aeed9ff3acc73fcc45166e9c4329047c3f088eec68de759\": rpc error: code = NotFound desc = could not find container \"0292c94cb9a686726aeed9ff3acc73fcc45166e9c4329047c3f088eec68de759\": container with ID starting with 0292c94cb9a686726aeed9ff3acc73fcc45166e9c4329047c3f088eec68de759 not found: ID does not exist" Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.547001 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.745472 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qgwxv"] Jan 27 21:51:22 crc kubenswrapper[4858]: I0127 21:51:22.754899 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qgwxv"] Jan 27 21:51:24 crc kubenswrapper[4858]: I0127 21:51:24.088123 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3" path="/var/lib/kubelet/pods/e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3/volumes" Jan 27 21:52:46 crc kubenswrapper[4858]: I0127 21:52:46.285911 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5r5ms/must-gather-m54br"] Jan 27 21:52:46 crc kubenswrapper[4858]: E0127 21:52:46.286901 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3" containerName="extract-utilities" Jan 27 21:52:46 crc kubenswrapper[4858]: I0127 21:52:46.286919 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3" containerName="extract-utilities" Jan 27 21:52:46 crc kubenswrapper[4858]: E0127 21:52:46.286936 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3" containerName="extract-content" Jan 27 21:52:46 crc kubenswrapper[4858]: I0127 21:52:46.286942 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3" containerName="extract-content" Jan 27 21:52:46 crc kubenswrapper[4858]: E0127 21:52:46.286960 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3" containerName="registry-server" Jan 27 21:52:46 crc kubenswrapper[4858]: I0127 21:52:46.286969 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3" containerName="registry-server" Jan 27 21:52:46 crc kubenswrapper[4858]: I0127 21:52:46.287238 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9b30d0b-8d2c-4fe2-b6ba-282a2cd286d3" containerName="registry-server" Jan 27 21:52:46 crc kubenswrapper[4858]: I0127 21:52:46.288540 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5r5ms/must-gather-m54br" Jan 27 21:52:46 crc kubenswrapper[4858]: I0127 21:52:46.290263 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5r5ms"/"kube-root-ca.crt" Jan 27 21:52:46 crc kubenswrapper[4858]: I0127 21:52:46.290496 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5r5ms"/"openshift-service-ca.crt" Jan 27 21:52:46 crc kubenswrapper[4858]: I0127 21:52:46.307709 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5r5ms"/"default-dockercfg-27xdr" Jan 27 21:52:46 crc kubenswrapper[4858]: I0127 21:52:46.320761 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5r5ms/must-gather-m54br"] Jan 27 21:52:46 crc kubenswrapper[4858]: I0127 21:52:46.434034 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/db47a894-c924-4cfe-b655-7da395bff4b4-must-gather-output\") pod \"must-gather-m54br\" (UID: \"db47a894-c924-4cfe-b655-7da395bff4b4\") " pod="openshift-must-gather-5r5ms/must-gather-m54br" Jan 27 21:52:46 crc kubenswrapper[4858]: I0127 21:52:46.434115 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlhwx\" (UniqueName: \"kubernetes.io/projected/db47a894-c924-4cfe-b655-7da395bff4b4-kube-api-access-xlhwx\") pod \"must-gather-m54br\" (UID: \"db47a894-c924-4cfe-b655-7da395bff4b4\") " pod="openshift-must-gather-5r5ms/must-gather-m54br" Jan 27 21:52:46 crc kubenswrapper[4858]: I0127 21:52:46.536088 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/db47a894-c924-4cfe-b655-7da395bff4b4-must-gather-output\") pod \"must-gather-m54br\" (UID: \"db47a894-c924-4cfe-b655-7da395bff4b4\") " pod="openshift-must-gather-5r5ms/must-gather-m54br" Jan 27 21:52:46 crc kubenswrapper[4858]: I0127 21:52:46.536193 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlhwx\" (UniqueName: \"kubernetes.io/projected/db47a894-c924-4cfe-b655-7da395bff4b4-kube-api-access-xlhwx\") pod \"must-gather-m54br\" (UID: \"db47a894-c924-4cfe-b655-7da395bff4b4\") " pod="openshift-must-gather-5r5ms/must-gather-m54br" Jan 27 21:52:46 crc kubenswrapper[4858]: I0127 21:52:46.536604 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/db47a894-c924-4cfe-b655-7da395bff4b4-must-gather-output\") pod \"must-gather-m54br\" (UID: \"db47a894-c924-4cfe-b655-7da395bff4b4\") " pod="openshift-must-gather-5r5ms/must-gather-m54br" Jan 27 21:52:46 crc kubenswrapper[4858]: I0127 21:52:46.559466 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlhwx\" (UniqueName: \"kubernetes.io/projected/db47a894-c924-4cfe-b655-7da395bff4b4-kube-api-access-xlhwx\") pod \"must-gather-m54br\" (UID: \"db47a894-c924-4cfe-b655-7da395bff4b4\") " pod="openshift-must-gather-5r5ms/must-gather-m54br" Jan 27 21:52:46 crc kubenswrapper[4858]: I0127 21:52:46.605161 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5r5ms/must-gather-m54br" Jan 27 21:52:47 crc kubenswrapper[4858]: I0127 21:52:47.307217 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5r5ms/must-gather-m54br"] Jan 27 21:52:47 crc kubenswrapper[4858]: I0127 21:52:47.364393 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5r5ms/must-gather-m54br" event={"ID":"db47a894-c924-4cfe-b655-7da395bff4b4","Type":"ContainerStarted","Data":"84cce7449a8f45e8321541313fedd6e55d3ff07cd674f0963c3ac3d81ae90d08"} Jan 27 21:52:48 crc kubenswrapper[4858]: I0127 21:52:48.375764 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5r5ms/must-gather-m54br" event={"ID":"db47a894-c924-4cfe-b655-7da395bff4b4","Type":"ContainerStarted","Data":"0e3f089703b58543601a876c2c30be13fe8bcb1f2d45a8a355d285ed4b8e6f24"} Jan 27 21:52:48 crc kubenswrapper[4858]: I0127 21:52:48.376083 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5r5ms/must-gather-m54br" event={"ID":"db47a894-c924-4cfe-b655-7da395bff4b4","Type":"ContainerStarted","Data":"0b01c3f9a7c6131e437e2ded93569bda22a78fb9420ae52b0f97ea1c26985254"} Jan 27 21:52:48 crc kubenswrapper[4858]: I0127 21:52:48.391005 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5r5ms/must-gather-m54br" podStartSLOduration=2.390979677 podStartE2EDuration="2.390979677s" podCreationTimestamp="2026-01-27 21:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:52:48.388339032 +0000 UTC m=+6313.096154788" watchObservedRunningTime="2026-01-27 21:52:48.390979677 +0000 UTC m=+6313.098795413" Jan 27 21:52:50 crc kubenswrapper[4858]: E0127 21:52:50.201155 4858 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.56:37024->38.129.56.56:42269: write tcp 38.129.56.56:37024->38.129.56.56:42269: write: broken pipe Jan 27 21:52:51 crc kubenswrapper[4858]: I0127 21:52:51.545235 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5r5ms/crc-debug-hhlvx"] Jan 27 21:52:51 crc kubenswrapper[4858]: I0127 21:52:51.547191 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5r5ms/crc-debug-hhlvx" Jan 27 21:52:51 crc kubenswrapper[4858]: I0127 21:52:51.741351 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg772\" (UniqueName: \"kubernetes.io/projected/157fe3c4-8421-43e6-ba76-2ed8eaba0032-kube-api-access-sg772\") pod \"crc-debug-hhlvx\" (UID: \"157fe3c4-8421-43e6-ba76-2ed8eaba0032\") " pod="openshift-must-gather-5r5ms/crc-debug-hhlvx" Jan 27 21:52:51 crc kubenswrapper[4858]: I0127 21:52:51.741575 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/157fe3c4-8421-43e6-ba76-2ed8eaba0032-host\") pod \"crc-debug-hhlvx\" (UID: \"157fe3c4-8421-43e6-ba76-2ed8eaba0032\") " pod="openshift-must-gather-5r5ms/crc-debug-hhlvx" Jan 27 21:52:51 crc kubenswrapper[4858]: I0127 21:52:51.843038 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/157fe3c4-8421-43e6-ba76-2ed8eaba0032-host\") pod \"crc-debug-hhlvx\" (UID: \"157fe3c4-8421-43e6-ba76-2ed8eaba0032\") " pod="openshift-must-gather-5r5ms/crc-debug-hhlvx" Jan 27 21:52:51 crc kubenswrapper[4858]: I0127 21:52:51.843197 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg772\" (UniqueName: \"kubernetes.io/projected/157fe3c4-8421-43e6-ba76-2ed8eaba0032-kube-api-access-sg772\") pod \"crc-debug-hhlvx\" (UID: \"157fe3c4-8421-43e6-ba76-2ed8eaba0032\") " pod="openshift-must-gather-5r5ms/crc-debug-hhlvx" Jan 27 21:52:51 crc kubenswrapper[4858]: I0127 21:52:51.843796 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/157fe3c4-8421-43e6-ba76-2ed8eaba0032-host\") pod \"crc-debug-hhlvx\" (UID: \"157fe3c4-8421-43e6-ba76-2ed8eaba0032\") " pod="openshift-must-gather-5r5ms/crc-debug-hhlvx" Jan 27 21:52:51 crc kubenswrapper[4858]: I0127 21:52:51.873647 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg772\" (UniqueName: \"kubernetes.io/projected/157fe3c4-8421-43e6-ba76-2ed8eaba0032-kube-api-access-sg772\") pod \"crc-debug-hhlvx\" (UID: \"157fe3c4-8421-43e6-ba76-2ed8eaba0032\") " pod="openshift-must-gather-5r5ms/crc-debug-hhlvx" Jan 27 21:52:52 crc kubenswrapper[4858]: I0127 21:52:52.170179 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5r5ms/crc-debug-hhlvx" Jan 27 21:52:52 crc kubenswrapper[4858]: W0127 21:52:52.217917 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod157fe3c4_8421_43e6_ba76_2ed8eaba0032.slice/crio-a3a6fb32989f3255c461d15edd419447f4ed5c3b63c40a1dbb99b7439b985161 WatchSource:0}: Error finding container a3a6fb32989f3255c461d15edd419447f4ed5c3b63c40a1dbb99b7439b985161: Status 404 returned error can't find the container with id a3a6fb32989f3255c461d15edd419447f4ed5c3b63c40a1dbb99b7439b985161 Jan 27 21:52:52 crc kubenswrapper[4858]: I0127 21:52:52.412229 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5r5ms/crc-debug-hhlvx" event={"ID":"157fe3c4-8421-43e6-ba76-2ed8eaba0032","Type":"ContainerStarted","Data":"a3a6fb32989f3255c461d15edd419447f4ed5c3b63c40a1dbb99b7439b985161"} Jan 27 21:52:53 crc kubenswrapper[4858]: I0127 21:52:53.422317 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5r5ms/crc-debug-hhlvx" event={"ID":"157fe3c4-8421-43e6-ba76-2ed8eaba0032","Type":"ContainerStarted","Data":"ad0755a3857787ace223d48fe823e8c71465386c7dfed4da451f2c46421d266c"} Jan 27 21:52:53 crc kubenswrapper[4858]: I0127 21:52:53.439769 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5r5ms/crc-debug-hhlvx" podStartSLOduration=2.439743837 podStartE2EDuration="2.439743837s" podCreationTimestamp="2026-01-27 21:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:52:53.436052592 +0000 UTC m=+6318.143868308" watchObservedRunningTime="2026-01-27 21:52:53.439743837 +0000 UTC m=+6318.147559553" Jan 27 21:52:59 crc kubenswrapper[4858]: I0127 21:52:59.328537 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:52:59 crc kubenswrapper[4858]: I0127 21:52:59.329110 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:53:29 crc kubenswrapper[4858]: I0127 21:53:29.329113 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:53:29 crc kubenswrapper[4858]: I0127 21:53:29.329595 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:53:34 crc kubenswrapper[4858]: I0127 21:53:34.057312 4858 generic.go:334] "Generic (PLEG): container finished" podID="157fe3c4-8421-43e6-ba76-2ed8eaba0032" 
containerID="ad0755a3857787ace223d48fe823e8c71465386c7dfed4da451f2c46421d266c" exitCode=0 Jan 27 21:53:34 crc kubenswrapper[4858]: I0127 21:53:34.057382 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5r5ms/crc-debug-hhlvx" event={"ID":"157fe3c4-8421-43e6-ba76-2ed8eaba0032","Type":"ContainerDied","Data":"ad0755a3857787ace223d48fe823e8c71465386c7dfed4da451f2c46421d266c"} Jan 27 21:53:35 crc kubenswrapper[4858]: I0127 21:53:35.184076 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5r5ms/crc-debug-hhlvx" Jan 27 21:53:35 crc kubenswrapper[4858]: I0127 21:53:35.221454 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5r5ms/crc-debug-hhlvx"] Jan 27 21:53:35 crc kubenswrapper[4858]: I0127 21:53:35.230284 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5r5ms/crc-debug-hhlvx"] Jan 27 21:53:35 crc kubenswrapper[4858]: I0127 21:53:35.294470 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/157fe3c4-8421-43e6-ba76-2ed8eaba0032-host\") pod \"157fe3c4-8421-43e6-ba76-2ed8eaba0032\" (UID: \"157fe3c4-8421-43e6-ba76-2ed8eaba0032\") " Jan 27 21:53:35 crc kubenswrapper[4858]: I0127 21:53:35.294803 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg772\" (UniqueName: \"kubernetes.io/projected/157fe3c4-8421-43e6-ba76-2ed8eaba0032-kube-api-access-sg772\") pod \"157fe3c4-8421-43e6-ba76-2ed8eaba0032\" (UID: \"157fe3c4-8421-43e6-ba76-2ed8eaba0032\") " Jan 27 21:53:35 crc kubenswrapper[4858]: I0127 21:53:35.294581 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/157fe3c4-8421-43e6-ba76-2ed8eaba0032-host" (OuterVolumeSpecName: "host") pod "157fe3c4-8421-43e6-ba76-2ed8eaba0032" (UID: "157fe3c4-8421-43e6-ba76-2ed8eaba0032"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:53:35 crc kubenswrapper[4858]: I0127 21:53:35.295969 4858 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/157fe3c4-8421-43e6-ba76-2ed8eaba0032-host\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:35 crc kubenswrapper[4858]: I0127 21:53:35.302586 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/157fe3c4-8421-43e6-ba76-2ed8eaba0032-kube-api-access-sg772" (OuterVolumeSpecName: "kube-api-access-sg772") pod "157fe3c4-8421-43e6-ba76-2ed8eaba0032" (UID: "157fe3c4-8421-43e6-ba76-2ed8eaba0032"). InnerVolumeSpecName "kube-api-access-sg772". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:53:35 crc kubenswrapper[4858]: I0127 21:53:35.397636 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg772\" (UniqueName: \"kubernetes.io/projected/157fe3c4-8421-43e6-ba76-2ed8eaba0032-kube-api-access-sg772\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:36 crc kubenswrapper[4858]: I0127 21:53:36.085130 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="157fe3c4-8421-43e6-ba76-2ed8eaba0032" path="/var/lib/kubelet/pods/157fe3c4-8421-43e6-ba76-2ed8eaba0032/volumes" Jan 27 21:53:36 crc kubenswrapper[4858]: I0127 21:53:36.086285 4858 scope.go:117] "RemoveContainer" containerID="ad0755a3857787ace223d48fe823e8c71465386c7dfed4da451f2c46421d266c" Jan 27 21:53:36 crc kubenswrapper[4858]: I0127 21:53:36.086434 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5r5ms/crc-debug-hhlvx" Jan 27 21:53:36 crc kubenswrapper[4858]: I0127 21:53:36.406971 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5r5ms/crc-debug-msmwn"] Jan 27 21:53:36 crc kubenswrapper[4858]: E0127 21:53:36.407816 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="157fe3c4-8421-43e6-ba76-2ed8eaba0032" containerName="container-00" Jan 27 21:53:36 crc kubenswrapper[4858]: I0127 21:53:36.407833 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="157fe3c4-8421-43e6-ba76-2ed8eaba0032" containerName="container-00" Jan 27 21:53:36 crc kubenswrapper[4858]: I0127 21:53:36.408086 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="157fe3c4-8421-43e6-ba76-2ed8eaba0032" containerName="container-00" Jan 27 21:53:36 crc kubenswrapper[4858]: I0127 21:53:36.408876 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5r5ms/crc-debug-msmwn" Jan 27 21:53:36 crc kubenswrapper[4858]: I0127 21:53:36.519661 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vd72\" (UniqueName: \"kubernetes.io/projected/5505bc80-2735-4413-9f77-511e3f1308b9-kube-api-access-4vd72\") pod \"crc-debug-msmwn\" (UID: \"5505bc80-2735-4413-9f77-511e3f1308b9\") " pod="openshift-must-gather-5r5ms/crc-debug-msmwn" Jan 27 21:53:36 crc kubenswrapper[4858]: I0127 21:53:36.519975 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5505bc80-2735-4413-9f77-511e3f1308b9-host\") pod \"crc-debug-msmwn\" (UID: \"5505bc80-2735-4413-9f77-511e3f1308b9\") " pod="openshift-must-gather-5r5ms/crc-debug-msmwn" Jan 27 21:53:36 crc kubenswrapper[4858]: I0127 21:53:36.622034 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vd72\" (UniqueName: \"kubernetes.io/projected/5505bc80-2735-4413-9f77-511e3f1308b9-kube-api-access-4vd72\") pod \"crc-debug-msmwn\" (UID: \"5505bc80-2735-4413-9f77-511e3f1308b9\") " pod="openshift-must-gather-5r5ms/crc-debug-msmwn" Jan 27 21:53:36 crc kubenswrapper[4858]: I0127 21:53:36.622844 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5505bc80-2735-4413-9f77-511e3f1308b9-host\") pod \"crc-debug-msmwn\" (UID: \"5505bc80-2735-4413-9f77-511e3f1308b9\") " pod="openshift-must-gather-5r5ms/crc-debug-msmwn" Jan 27 21:53:36 crc kubenswrapper[4858]: I0127 21:53:36.623131 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5505bc80-2735-4413-9f77-511e3f1308b9-host\") pod \"crc-debug-msmwn\" (UID: \"5505bc80-2735-4413-9f77-511e3f1308b9\") " pod="openshift-must-gather-5r5ms/crc-debug-msmwn" Jan 27 21:53:36 crc kubenswrapper[4858]: I0127 21:53:36.642876 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vd72\" (UniqueName: \"kubernetes.io/projected/5505bc80-2735-4413-9f77-511e3f1308b9-kube-api-access-4vd72\") pod \"crc-debug-msmwn\" (UID: \"5505bc80-2735-4413-9f77-511e3f1308b9\") " pod="openshift-must-gather-5r5ms/crc-debug-msmwn" Jan 27 21:53:36 crc kubenswrapper[4858]: I0127 21:53:36.726058 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5r5ms/crc-debug-msmwn" Jan 27 21:53:37 crc kubenswrapper[4858]: I0127 21:53:37.098132 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5r5ms/crc-debug-msmwn" event={"ID":"5505bc80-2735-4413-9f77-511e3f1308b9","Type":"ContainerStarted","Data":"0e3d008fc9bc9b65203f64711f32f66498bc7b14e33d4b543b8d8bc328792569"} Jan 27 21:53:37 crc kubenswrapper[4858]: I0127 21:53:37.098444 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5r5ms/crc-debug-msmwn" event={"ID":"5505bc80-2735-4413-9f77-511e3f1308b9","Type":"ContainerStarted","Data":"5e9f46fa1e294c14d9cd96bc5967e30ed832b43791474ea5b2d0295ea83b3ee4"} Jan 27 21:53:37 crc kubenswrapper[4858]: I0127 21:53:37.114656 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5r5ms/crc-debug-msmwn" podStartSLOduration=1.114634714 podStartE2EDuration="1.114634714s" podCreationTimestamp="2026-01-27 21:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 21:53:37.110052404 +0000 UTC m=+6361.817868110" watchObservedRunningTime="2026-01-27 21:53:37.114634714 +0000 UTC m=+6361.822450420" Jan 27 21:53:38 crc kubenswrapper[4858]: I0127 21:53:38.116719 4858 generic.go:334] "Generic (PLEG): container finished" podID="5505bc80-2735-4413-9f77-511e3f1308b9" containerID="0e3d008fc9bc9b65203f64711f32f66498bc7b14e33d4b543b8d8bc328792569" exitCode=0 Jan 27 21:53:38 crc kubenswrapper[4858]: I0127 21:53:38.116966 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5r5ms/crc-debug-msmwn" event={"ID":"5505bc80-2735-4413-9f77-511e3f1308b9","Type":"ContainerDied","Data":"0e3d008fc9bc9b65203f64711f32f66498bc7b14e33d4b543b8d8bc328792569"} Jan 27 21:53:39 crc kubenswrapper[4858]: I0127 21:53:39.274615 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5r5ms/crc-debug-msmwn" Jan 27 21:53:39 crc kubenswrapper[4858]: I0127 21:53:39.371639 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vd72\" (UniqueName: \"kubernetes.io/projected/5505bc80-2735-4413-9f77-511e3f1308b9-kube-api-access-4vd72\") pod \"5505bc80-2735-4413-9f77-511e3f1308b9\" (UID: \"5505bc80-2735-4413-9f77-511e3f1308b9\") " Jan 27 21:53:39 crc kubenswrapper[4858]: I0127 21:53:39.371858 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5505bc80-2735-4413-9f77-511e3f1308b9-host\") pod \"5505bc80-2735-4413-9f77-511e3f1308b9\" (UID: \"5505bc80-2735-4413-9f77-511e3f1308b9\") " Jan 27 21:53:39 crc kubenswrapper[4858]: I0127 21:53:39.372002 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5505bc80-2735-4413-9f77-511e3f1308b9-host" (OuterVolumeSpecName: "host") pod "5505bc80-2735-4413-9f77-511e3f1308b9" (UID: "5505bc80-2735-4413-9f77-511e3f1308b9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:53:39 crc kubenswrapper[4858]: I0127 21:53:39.372381 4858 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5505bc80-2735-4413-9f77-511e3f1308b9-host\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:39 crc kubenswrapper[4858]: I0127 21:53:39.381607 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5r5ms/crc-debug-msmwn"] Jan 27 21:53:39 crc kubenswrapper[4858]: I0127 21:53:39.384043 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5505bc80-2735-4413-9f77-511e3f1308b9-kube-api-access-4vd72" (OuterVolumeSpecName: "kube-api-access-4vd72") pod "5505bc80-2735-4413-9f77-511e3f1308b9" (UID: "5505bc80-2735-4413-9f77-511e3f1308b9"). InnerVolumeSpecName "kube-api-access-4vd72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:53:39 crc kubenswrapper[4858]: I0127 21:53:39.396140 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5r5ms/crc-debug-msmwn"] Jan 27 21:53:39 crc kubenswrapper[4858]: I0127 21:53:39.474694 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vd72\" (UniqueName: \"kubernetes.io/projected/5505bc80-2735-4413-9f77-511e3f1308b9-kube-api-access-4vd72\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:40 crc kubenswrapper[4858]: I0127 21:53:40.080842 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5505bc80-2735-4413-9f77-511e3f1308b9" path="/var/lib/kubelet/pods/5505bc80-2735-4413-9f77-511e3f1308b9/volumes" Jan 27 21:53:40 crc kubenswrapper[4858]: I0127 21:53:40.141224 4858 scope.go:117] "RemoveContainer" containerID="0e3d008fc9bc9b65203f64711f32f66498bc7b14e33d4b543b8d8bc328792569" Jan 27 21:53:40 crc kubenswrapper[4858]: I0127 21:53:40.141265 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5r5ms/crc-debug-msmwn" Jan 27 21:53:40 crc kubenswrapper[4858]: I0127 21:53:40.615772 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5r5ms/crc-debug-tjzmp"] Jan 27 21:53:40 crc kubenswrapper[4858]: E0127 21:53:40.616275 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5505bc80-2735-4413-9f77-511e3f1308b9" containerName="container-00" Jan 27 21:53:40 crc kubenswrapper[4858]: I0127 21:53:40.616293 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5505bc80-2735-4413-9f77-511e3f1308b9" containerName="container-00" Jan 27 21:53:40 crc kubenswrapper[4858]: I0127 21:53:40.616645 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5505bc80-2735-4413-9f77-511e3f1308b9" containerName="container-00" Jan 27 21:53:40 crc kubenswrapper[4858]: I0127 21:53:40.617637 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5r5ms/crc-debug-tjzmp" Jan 27 21:53:40 crc kubenswrapper[4858]: I0127 21:53:40.800231 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e00fbb9e-09f9-452b-9c9c-7dc0470c9393-host\") pod \"crc-debug-tjzmp\" (UID: \"e00fbb9e-09f9-452b-9c9c-7dc0470c9393\") " pod="openshift-must-gather-5r5ms/crc-debug-tjzmp" Jan 27 21:53:40 crc kubenswrapper[4858]: I0127 21:53:40.800282 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2vqw\" (UniqueName: \"kubernetes.io/projected/e00fbb9e-09f9-452b-9c9c-7dc0470c9393-kube-api-access-x2vqw\") pod \"crc-debug-tjzmp\" (UID: \"e00fbb9e-09f9-452b-9c9c-7dc0470c9393\") " pod="openshift-must-gather-5r5ms/crc-debug-tjzmp" Jan 27 21:53:40 crc kubenswrapper[4858]: I0127 21:53:40.902876 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e00fbb9e-09f9-452b-9c9c-7dc0470c9393-host\") pod \"crc-debug-tjzmp\" (UID: \"e00fbb9e-09f9-452b-9c9c-7dc0470c9393\") " pod="openshift-must-gather-5r5ms/crc-debug-tjzmp" Jan 27 21:53:40 crc kubenswrapper[4858]: I0127 21:53:40.902941 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2vqw\" (UniqueName: \"kubernetes.io/projected/e00fbb9e-09f9-452b-9c9c-7dc0470c9393-kube-api-access-x2vqw\") pod \"crc-debug-tjzmp\" (UID: \"e00fbb9e-09f9-452b-9c9c-7dc0470c9393\") " pod="openshift-must-gather-5r5ms/crc-debug-tjzmp" Jan 27 21:53:40 crc kubenswrapper[4858]: I0127 21:53:40.903073 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e00fbb9e-09f9-452b-9c9c-7dc0470c9393-host\") pod \"crc-debug-tjzmp\" (UID: \"e00fbb9e-09f9-452b-9c9c-7dc0470c9393\") " pod="openshift-must-gather-5r5ms/crc-debug-tjzmp" Jan 27 21:53:40 crc kubenswrapper[4858]: I0127 21:53:40.923375 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2vqw\" (UniqueName: \"kubernetes.io/projected/e00fbb9e-09f9-452b-9c9c-7dc0470c9393-kube-api-access-x2vqw\") pod \"crc-debug-tjzmp\" (UID: \"e00fbb9e-09f9-452b-9c9c-7dc0470c9393\") " pod="openshift-must-gather-5r5ms/crc-debug-tjzmp" Jan 27 21:53:40 crc kubenswrapper[4858]: I0127 21:53:40.938012 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5r5ms/crc-debug-tjzmp" Jan 27 21:53:40 crc kubenswrapper[4858]: W0127 21:53:40.961072 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode00fbb9e_09f9_452b_9c9c_7dc0470c9393.slice/crio-1ec4acfdb7fa89cab503da91fcf5873399d523cd1791214335d83ee26637e88f WatchSource:0}: Error finding container 1ec4acfdb7fa89cab503da91fcf5873399d523cd1791214335d83ee26637e88f: Status 404 returned error can't find the container with id 1ec4acfdb7fa89cab503da91fcf5873399d523cd1791214335d83ee26637e88f Jan 27 21:53:41 crc kubenswrapper[4858]: I0127 21:53:41.158659 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5r5ms/crc-debug-tjzmp" event={"ID":"e00fbb9e-09f9-452b-9c9c-7dc0470c9393","Type":"ContainerStarted","Data":"1ec4acfdb7fa89cab503da91fcf5873399d523cd1791214335d83ee26637e88f"} Jan 27 21:53:42 crc kubenswrapper[4858]: I0127 21:53:42.177678 4858 generic.go:334] "Generic (PLEG): container finished" podID="e00fbb9e-09f9-452b-9c9c-7dc0470c9393" containerID="18d929d9794e3e9ea4ee27a6c75f8c8dd294152628801ccebae84a2703c44345" exitCode=0 Jan 27 21:53:42 crc kubenswrapper[4858]: I0127 21:53:42.177824 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5r5ms/crc-debug-tjzmp" event={"ID":"e00fbb9e-09f9-452b-9c9c-7dc0470c9393","Type":"ContainerDied","Data":"18d929d9794e3e9ea4ee27a6c75f8c8dd294152628801ccebae84a2703c44345"} Jan 27 21:53:42 crc kubenswrapper[4858]: I0127 21:53:42.222635 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5r5ms/crc-debug-tjzmp"] Jan 27 21:53:42 crc kubenswrapper[4858]: I0127 21:53:42.235589 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5r5ms/crc-debug-tjzmp"] Jan 27 21:53:43 crc kubenswrapper[4858]: I0127 21:53:43.303565 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5r5ms/crc-debug-tjzmp" Jan 27 21:53:43 crc kubenswrapper[4858]: I0127 21:53:43.362445 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e00fbb9e-09f9-452b-9c9c-7dc0470c9393-host\") pod \"e00fbb9e-09f9-452b-9c9c-7dc0470c9393\" (UID: \"e00fbb9e-09f9-452b-9c9c-7dc0470c9393\") " Jan 27 21:53:43 crc kubenswrapper[4858]: I0127 21:53:43.362540 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2vqw\" (UniqueName: \"kubernetes.io/projected/e00fbb9e-09f9-452b-9c9c-7dc0470c9393-kube-api-access-x2vqw\") pod \"e00fbb9e-09f9-452b-9c9c-7dc0470c9393\" (UID: \"e00fbb9e-09f9-452b-9c9c-7dc0470c9393\") " Jan 27 21:53:43 crc kubenswrapper[4858]: I0127 21:53:43.363329 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e00fbb9e-09f9-452b-9c9c-7dc0470c9393-host" (OuterVolumeSpecName: "host") pod "e00fbb9e-09f9-452b-9c9c-7dc0470c9393" (UID: "e00fbb9e-09f9-452b-9c9c-7dc0470c9393"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 21:53:43 crc kubenswrapper[4858]: I0127 21:53:43.368731 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e00fbb9e-09f9-452b-9c9c-7dc0470c9393-kube-api-access-x2vqw" (OuterVolumeSpecName: "kube-api-access-x2vqw") pod "e00fbb9e-09f9-452b-9c9c-7dc0470c9393" (UID: "e00fbb9e-09f9-452b-9c9c-7dc0470c9393"). 
InnerVolumeSpecName "kube-api-access-x2vqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:53:43 crc kubenswrapper[4858]: I0127 21:53:43.464517 4858 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e00fbb9e-09f9-452b-9c9c-7dc0470c9393-host\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:43 crc kubenswrapper[4858]: I0127 21:53:43.464599 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2vqw\" (UniqueName: \"kubernetes.io/projected/e00fbb9e-09f9-452b-9c9c-7dc0470c9393-kube-api-access-x2vqw\") on node \"crc\" DevicePath \"\"" Jan 27 21:53:44 crc kubenswrapper[4858]: I0127 21:53:44.085033 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e00fbb9e-09f9-452b-9c9c-7dc0470c9393" path="/var/lib/kubelet/pods/e00fbb9e-09f9-452b-9c9c-7dc0470c9393/volumes" Jan 27 21:53:44 crc kubenswrapper[4858]: I0127 21:53:44.196682 4858 scope.go:117] "RemoveContainer" containerID="18d929d9794e3e9ea4ee27a6c75f8c8dd294152628801ccebae84a2703c44345" Jan 27 21:53:44 crc kubenswrapper[4858]: I0127 21:53:44.196730 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5r5ms/crc-debug-tjzmp" Jan 27 21:53:59 crc kubenswrapper[4858]: I0127 21:53:59.329085 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 21:53:59 crc kubenswrapper[4858]: I0127 21:53:59.329660 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 21:53:59 crc kubenswrapper[4858]: I0127 21:53:59.329713 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 21:53:59 crc kubenswrapper[4858]: I0127 21:53:59.330461 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 21:53:59 crc kubenswrapper[4858]: I0127 21:53:59.330518 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" gracePeriod=600 Jan 27 21:53:59 crc kubenswrapper[4858]: E0127 21:53:59.459138 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:54:00 crc 
Jan 27 21:54:00 crc kubenswrapper[4858]: I0127 21:54:00.372980 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5"}
Jan 27 21:54:00 crc kubenswrapper[4858]: I0127 21:54:00.373027 4858 scope.go:117] "RemoveContainer" containerID="329130a47c7e1d937f1974012a1991feff790eda04f6fd53757ddd5c0aec43b1"
Jan 27 21:54:00 crc kubenswrapper[4858]: I0127 21:54:00.373778 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5"
Jan 27 21:54:00 crc kubenswrapper[4858]: E0127 21:54:00.374124 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
Jan 27 21:54:11 crc kubenswrapper[4858]: I0127 21:54:11.071937 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5"
Jan 27 21:54:11 crc kubenswrapper[4858]: E0127 21:54:11.072711 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
Jan 27 21:54:22 crc kubenswrapper[4858]: I0127 21:54:22.071369 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5"
Jan 27 21:54:22 crc kubenswrapper[4858]: E0127 21:54:22.072153 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
Jan 27 21:54:32 crc kubenswrapper[4858]: I0127 21:54:32.480177 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-69dcf58cf6-v246z_9651b951-4ad0-42ae-85fb-176da5b8ccdf/barbican-api/0.log"
Jan 27 21:54:32 crc kubenswrapper[4858]: I0127 21:54:32.671444 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-69dcf58cf6-v246z_9651b951-4ad0-42ae-85fb-176da5b8ccdf/barbican-api-log/0.log"
Jan 27 21:54:32 crc kubenswrapper[4858]: I0127 21:54:32.699683 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-58f4598744-qn5jn_9927d309-f818-4163-9659-f7b6a060960e/barbican-keystone-listener/0.log"
Jan 27 21:54:32 crc kubenswrapper[4858]: I0127 21:54:32.790571 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-58f4598744-qn5jn_9927d309-f818-4163-9659-f7b6a060960e/barbican-keystone-listener-log/0.log"
"Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-58f4598744-qn5jn_9927d309-f818-4163-9659-f7b6a060960e/barbican-keystone-listener-log/0.log" Jan 27 21:54:32 crc kubenswrapper[4858]: I0127 21:54:32.917048 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5598668497-6nzrb_9bb04320-907b-4d35-9c41-ea828a779f5d/barbican-worker/0.log" Jan 27 21:54:32 crc kubenswrapper[4858]: I0127 21:54:32.966215 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5598668497-6nzrb_9bb04320-907b-4d35-9c41-ea828a779f5d/barbican-worker-log/0.log" Jan 27 21:54:33 crc kubenswrapper[4858]: I0127 21:54:33.113053 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-72gfj_20ffe28d-a9df-4416-85b7-c501d7555431/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:54:33 crc kubenswrapper[4858]: I0127 21:54:33.250050 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4e9e36b1-d81b-4be3-a0d7-ee413bdece24/ceilometer-central-agent/0.log" Jan 27 21:54:33 crc kubenswrapper[4858]: I0127 21:54:33.310448 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4e9e36b1-d81b-4be3-a0d7-ee413bdece24/ceilometer-notification-agent/0.log" Jan 27 21:54:33 crc kubenswrapper[4858]: I0127 21:54:33.414846 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4e9e36b1-d81b-4be3-a0d7-ee413bdece24/proxy-httpd/0.log" Jan 27 21:54:33 crc kubenswrapper[4858]: I0127 21:54:33.457255 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_4e9e36b1-d81b-4be3-a0d7-ee413bdece24/sg-core/0.log" Jan 27 21:54:33 crc kubenswrapper[4858]: I0127 21:54:33.673075 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_29bdfd71-369f-46e8-be09-4e5b5bb22d1a/cinder-api-log/0.log" Jan 27 21:54:33 crc kubenswrapper[4858]: I0127 21:54:33.945770 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_2106fe3e-bec7-4072-ba21-4f55b4a1b37a/probe/0.log" Jan 27 21:54:34 crc kubenswrapper[4858]: I0127 21:54:34.247067 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d853bb36-2749-40a8-9533-4caa077b1812/cinder-scheduler/0.log" Jan 27 21:54:34 crc kubenswrapper[4858]: I0127 21:54:34.285160 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_29bdfd71-369f-46e8-be09-4e5b5bb22d1a/cinder-api/0.log" Jan 27 21:54:34 crc kubenswrapper[4858]: I0127 21:54:34.291024 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_2106fe3e-bec7-4072-ba21-4f55b4a1b37a/cinder-backup/0.log" Jan 27 21:54:34 crc kubenswrapper[4858]: I0127 21:54:34.325198 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d853bb36-2749-40a8-9533-4caa077b1812/probe/0.log" Jan 27 21:54:34 crc kubenswrapper[4858]: I0127 21:54:34.524794 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_292ec3c5-71af-43c3-8bee-e815c3876637/probe/0.log" Jan 27 21:54:34 crc kubenswrapper[4858]: I0127 21:54:34.760873 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_292ec3c5-71af-43c3-8bee-e815c3876637/cinder-volume/0.log" Jan 27 21:54:35 crc kubenswrapper[4858]: I0127 21:54:35.100580 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-volume-nfs-2-0_9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d/probe/0.log" Jan 27 21:54:35 crc kubenswrapper[4858]: I0127 21:54:35.220242 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hf82v_8f14a76f-9e03-4695-98e5-c1efe11ae337/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:54:35 crc kubenswrapper[4858]: I0127 21:54:35.245264 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_9ca7dea3-c240-4c7e-bc05-efe3ca4a9f9d/cinder-volume/0.log" Jan 27 21:54:35 crc kubenswrapper[4858]: I0127 21:54:35.418922 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-mbwng_70c1d5d9-384f-4155-b4bc-cdc9185090f0/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:54:35 crc kubenswrapper[4858]: I0127 21:54:35.492067 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-694549759-h5nzn_d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e/init/0.log" Jan 27 21:54:35 crc kubenswrapper[4858]: I0127 21:54:35.678868 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-694549759-h5nzn_d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e/init/0.log" Jan 27 21:54:35 crc kubenswrapper[4858]: I0127 21:54:35.753569 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-wfhgt_d59ffd9a-001c-400a-b79b-4617489956ed/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:54:35 crc kubenswrapper[4858]: I0127 21:54:35.836409 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-694549759-h5nzn_d57b621e-ff8c-44b2-8d9a-2cbe53bdf40e/dnsmasq-dns/0.log" Jan 27 21:54:36 crc kubenswrapper[4858]: I0127 21:54:36.043666 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_be03aee4-8299-48e7-91cb-18bbad0b2a0b/glance-httpd/0.log" Jan 27 21:54:36 crc kubenswrapper[4858]: I0127 21:54:36.130512 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_be03aee4-8299-48e7-91cb-18bbad0b2a0b/glance-log/0.log" Jan 27 21:54:36 crc kubenswrapper[4858]: I0127 21:54:36.181688 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_719be79f-34ec-4e95-b1a7-e507c6214053/glance-log/0.log" Jan 27 21:54:36 crc kubenswrapper[4858]: I0127 21:54:36.201067 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_719be79f-34ec-4e95-b1a7-e507c6214053/glance-httpd/0.log" Jan 27 21:54:36 crc kubenswrapper[4858]: I0127 21:54:36.475487 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57556bc8bb-j4fhs_996129af-9ae9-44ca-b677-2c27bf71847d/horizon/0.log" Jan 27 21:54:36 crc kubenswrapper[4858]: I0127 21:54:36.488730 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-d76mj_a05def4c-d0a6-4e87-8b26-8d72512941a2/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:54:36 crc kubenswrapper[4858]: I0127 21:54:36.722929 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lzndd_78e25299-cf17-451b-8f2f-d980ff184dac/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:54:36 crc kubenswrapper[4858]: 
I0127 21:54:36.944853 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29492461-vqlt7_5b18bb1a-5b75-4c25-b553-12b03b2492a0/keystone-cron/0.log" Jan 27 21:54:37 crc kubenswrapper[4858]: I0127 21:54:37.072948 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 21:54:37 crc kubenswrapper[4858]: E0127 21:54:37.073371 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:54:37 crc kubenswrapper[4858]: I0127 21:54:37.151745 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_4d832896-a304-4a89-8ef2-607eea6623e5/kube-state-metrics/0.log" Jan 27 21:54:37 crc kubenswrapper[4858]: I0127 21:54:37.309588 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-k8v4d_3c49607c-dca5-4943-acbc-5c13058a99df/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:54:37 crc kubenswrapper[4858]: I0127 21:54:37.339322 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57556bc8bb-j4fhs_996129af-9ae9-44ca-b677-2c27bf71847d/horizon-log/0.log" Jan 27 21:54:37 crc kubenswrapper[4858]: I0127 21:54:37.414287 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5bf568dbc7-xlg4d_2c447296-df73-4efb-b85b-dc9d468d2d80/keystone-api/0.log" Jan 27 21:54:37 crc kubenswrapper[4858]: I0127 21:54:37.882577 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5c77755cc5-ffvng_4991d38c-6548-43d3-b4b7-884b71af9f07/neutron-api/0.log" Jan 27 21:54:37 crc kubenswrapper[4858]: I0127 21:54:37.909633 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-246cx_d30a30e0-0d38-4abd-8fc2-71b1ddce069a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:54:37 crc kubenswrapper[4858]: I0127 21:54:37.914378 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5c77755cc5-ffvng_4991d38c-6548-43d3-b4b7-884b71af9f07/neutron-httpd/0.log" Jan 27 21:54:38 crc kubenswrapper[4858]: I0127 21:54:38.839152 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ed5a0e2c-bf3a-47c8-aecd-be2cd0b426b0/nova-cell0-conductor-conductor/0.log" Jan 27 21:54:39 crc kubenswrapper[4858]: I0127 21:54:39.209388 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_20379f76-6255-45c6-ba02-55526177a0c0/nova-cell1-conductor-conductor/0.log" Jan 27 21:54:39 crc kubenswrapper[4858]: I0127 21:54:39.535909 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_e4fe74ae-d5b4-4a27-9bfe-e0039fa7ce91/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 21:54:39 crc kubenswrapper[4858]: I0127 21:54:39.736448 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b104f9b-a37f-44bc-875f-03ce0d396c57/nova-api-log/0.log" Jan 27 21:54:39 crc kubenswrapper[4858]: I0127 21:54:39.834223 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-87t6g_cee2f5ea-c848-418b-975f-ba255506d1ae/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:54:40 crc kubenswrapper[4858]: I0127 21:54:40.133598 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24/nova-metadata-log/0.log" Jan 27 21:54:40 crc kubenswrapper[4858]: I0127 21:54:40.604792 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4b104f9b-a37f-44bc-875f-03ce0d396c57/nova-api-api/0.log" Jan 27 21:54:40 crc kubenswrapper[4858]: I0127 21:54:40.615528 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f7f223cd-763c-408e-a3cf-067af57416af/mysql-bootstrap/0.log" Jan 27 21:54:40 crc kubenswrapper[4858]: I0127 21:54:40.763647 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6f1620a5-c040-450a-a149-e7bf421b80d9/nova-scheduler-scheduler/0.log" Jan 27 21:54:40 crc kubenswrapper[4858]: I0127 21:54:40.864879 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f7f223cd-763c-408e-a3cf-067af57416af/mysql-bootstrap/0.log" Jan 27 21:54:40 crc kubenswrapper[4858]: I0127 21:54:40.893086 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f7f223cd-763c-408e-a3cf-067af57416af/galera/0.log" Jan 27 21:54:41 crc kubenswrapper[4858]: I0127 21:54:41.116965 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4768f41e-8ff0-4cec-b741-75f8902eb0e8/mysql-bootstrap/0.log" Jan 27 21:54:41 crc kubenswrapper[4858]: I0127 21:54:41.320048 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4768f41e-8ff0-4cec-b741-75f8902eb0e8/mysql-bootstrap/0.log" Jan 27 21:54:41 crc kubenswrapper[4858]: I0127 21:54:41.332675 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4768f41e-8ff0-4cec-b741-75f8902eb0e8/galera/0.log" Jan 27 21:54:41 crc kubenswrapper[4858]: I0127 21:54:41.652059 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_892953fc-7620-4274-9f89-c86e2ec23782/openstackclient/0.log" Jan 27 21:54:41 crc kubenswrapper[4858]: I0127 21:54:41.673940 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jc5cc_d9e646d5-48a6-4c3b-8fb9-fec2a17d5eaa/ovn-controller/0.log" Jan 27 21:54:42 crc kubenswrapper[4858]: I0127 21:54:42.062487 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wzfwm_51e0f41c-e13e-41d1-bc48-71c6ef96994c/openstack-network-exporter/0.log" Jan 27 21:54:42 crc kubenswrapper[4858]: I0127 21:54:42.190447 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vhbc7_b28f0be1-aa4f-445d-95c3-1abd84b9c82a/ovsdb-server-init/0.log" Jan 27 21:54:42 crc kubenswrapper[4858]: I0127 21:54:42.508132 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vhbc7_b28f0be1-aa4f-445d-95c3-1abd84b9c82a/ovsdb-server-init/0.log" Jan 27 21:54:42 crc kubenswrapper[4858]: I0127 21:54:42.515609 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vhbc7_b28f0be1-aa4f-445d-95c3-1abd84b9c82a/ovsdb-server/0.log" Jan 27 21:54:42 crc kubenswrapper[4858]: I0127 21:54:42.808296 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_2a9b68aa-8d04-462e-9d8a-0c0bfa73dc24/nova-metadata-metadata/0.log" Jan 27 21:54:42 crc kubenswrapper[4858]: I0127 21:54:42.812304 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-xlcx9_1e29e00b-0b7a-4415-a1b1-abd8aec81f9e/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:54:42 crc kubenswrapper[4858]: I0127 21:54:42.910630 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vhbc7_b28f0be1-aa4f-445d-95c3-1abd84b9c82a/ovs-vswitchd/0.log" Jan 27 21:54:43 crc kubenswrapper[4858]: I0127 21:54:43.305125 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3c57afc3-0c88-46a8-ab70-332b1a43ee7f/openstack-network-exporter/0.log" Jan 27 21:54:43 crc kubenswrapper[4858]: I0127 21:54:43.354195 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3c57afc3-0c88-46a8-ab70-332b1a43ee7f/ovn-northd/0.log" Jan 27 21:54:43 crc kubenswrapper[4858]: I0127 21:54:43.457181 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b049e044-9171-4011-9c90-c334fa955321/openstack-network-exporter/0.log" Jan 27 21:54:43 crc kubenswrapper[4858]: I0127 21:54:43.549603 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b049e044-9171-4011-9c90-c334fa955321/ovsdbserver-nb/0.log" Jan 27 21:54:43 crc kubenswrapper[4858]: I0127 21:54:43.707916 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_13a7d533-55e2-4072-add8-4cd41613da8a/ovsdbserver-sb/0.log" Jan 27 21:54:43 crc kubenswrapper[4858]: I0127 21:54:43.717477 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_13a7d533-55e2-4072-add8-4cd41613da8a/openstack-network-exporter/0.log" Jan 27 21:54:44 crc kubenswrapper[4858]: I0127 21:54:44.195510 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0511fb5d-042b-4155-88f8-3711949342c5/init-config-reloader/0.log" Jan 27 21:54:44 crc kubenswrapper[4858]: I0127 21:54:44.199065 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f9b655566-275d7_598238a6-e427-47db-b460-298627190cce/placement-api/0.log" Jan 27 21:54:44 crc kubenswrapper[4858]: I0127 21:54:44.325991 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5f9b655566-275d7_598238a6-e427-47db-b460-298627190cce/placement-log/0.log" Jan 27 21:54:44 crc kubenswrapper[4858]: I0127 21:54:44.404905 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0511fb5d-042b-4155-88f8-3711949342c5/config-reloader/0.log" Jan 27 21:54:44 crc kubenswrapper[4858]: I0127 21:54:44.433521 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0511fb5d-042b-4155-88f8-3711949342c5/init-config-reloader/0.log" Jan 27 21:54:44 crc kubenswrapper[4858]: I0127 21:54:44.484382 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0511fb5d-042b-4155-88f8-3711949342c5/prometheus/0.log" Jan 27 21:54:44 crc kubenswrapper[4858]: I0127 21:54:44.607106 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d8aaed51-c0b1-4242-8d7b-a4256539e2ea/setup-container/0.log" Jan 27 21:54:44 crc kubenswrapper[4858]: I0127 21:54:44.612160 4858 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0511fb5d-042b-4155-88f8-3711949342c5/thanos-sidecar/0.log" Jan 27 21:54:44 crc kubenswrapper[4858]: I0127 21:54:44.878639 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d8aaed51-c0b1-4242-8d7b-a4256539e2ea/setup-container/0.log" Jan 27 21:54:44 crc kubenswrapper[4858]: I0127 21:54:44.917283 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d8aaed51-c0b1-4242-8d7b-a4256539e2ea/rabbitmq/0.log" Jan 27 21:54:45 crc kubenswrapper[4858]: I0127 21:54:45.048458 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_6c539609-6c9e-46bc-a0d7-6a629e83ce17/setup-container/0.log" Jan 27 21:54:45 crc kubenswrapper[4858]: I0127 21:54:45.240862 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_6c539609-6c9e-46bc-a0d7-6a629e83ce17/setup-container/0.log" Jan 27 21:54:45 crc kubenswrapper[4858]: I0127 21:54:45.329190 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e61ce5ac-61b7-41f3-aab6-c4b2e03978d1/setup-container/0.log" Jan 27 21:54:45 crc kubenswrapper[4858]: I0127 21:54:45.375287 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_6c539609-6c9e-46bc-a0d7-6a629e83ce17/rabbitmq/0.log" Jan 27 21:54:45 crc kubenswrapper[4858]: I0127 21:54:45.598743 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e61ce5ac-61b7-41f3-aab6-c4b2e03978d1/setup-container/0.log" Jan 27 21:54:45 crc kubenswrapper[4858]: I0127 21:54:45.649358 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e61ce5ac-61b7-41f3-aab6-c4b2e03978d1/rabbitmq/0.log" Jan 27 21:54:45 crc kubenswrapper[4858]: I0127 21:54:45.692101 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-r79db_7a0534e5-d746-4c1e-93a0-9cd2b4f79271/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:54:45 crc kubenswrapper[4858]: I0127 21:54:45.905997 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-d646h_dea36689-21b8-4ef7-9ead-35b516cb5f60/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:54:46 crc kubenswrapper[4858]: I0127 21:54:46.019809 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-8454l_dfd9ae76-5a01-46af-995d-6fa271c1e3b8/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:54:46 crc kubenswrapper[4858]: I0127 21:54:46.216004 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-ftc8l_c5817364-db24-4f51-b709-6ec41b069f0b/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:54:46 crc kubenswrapper[4858]: I0127 21:54:46.300351 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-rn685_7ea0384d-7b36-4de2-8718-58c49d6a8ef8/ssh-known-hosts-edpm-deployment/0.log" Jan 27 21:54:46 crc kubenswrapper[4858]: I0127 21:54:46.550168 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-574fc98977-sp7zp_57e04641-598d-459b-9996-0ae4182ae4fb/proxy-server/0.log" Jan 27 21:54:46 crc kubenswrapper[4858]: I0127 21:54:46.671815 4858 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-574fc98977-sp7zp_57e04641-598d-459b-9996-0ae4182ae4fb/proxy-httpd/0.log" Jan 27 21:54:46 crc kubenswrapper[4858]: I0127 21:54:46.721676 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-msjpr_e95660bd-4df7-4b1f-8dd1-8183870d0c8e/swift-ring-rebalance/0.log" Jan 27 21:54:46 crc kubenswrapper[4858]: I0127 21:54:46.904660 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/account-auditor/0.log" Jan 27 21:54:46 crc kubenswrapper[4858]: I0127 21:54:46.930524 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/account-reaper/0.log" Jan 27 21:54:47 crc kubenswrapper[4858]: I0127 21:54:47.058943 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/account-replicator/0.log" Jan 27 21:54:47 crc kubenswrapper[4858]: I0127 21:54:47.108895 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/account-server/0.log" Jan 27 21:54:47 crc kubenswrapper[4858]: I0127 21:54:47.208612 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/container-auditor/0.log" Jan 27 21:54:47 crc kubenswrapper[4858]: I0127 21:54:47.221109 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/container-replicator/0.log" Jan 27 21:54:47 crc kubenswrapper[4858]: I0127 21:54:47.298449 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/container-server/0.log" Jan 27 21:54:47 crc kubenswrapper[4858]: I0127 21:54:47.400461 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/container-updater/0.log" Jan 27 21:54:47 crc kubenswrapper[4858]: I0127 21:54:47.457534 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/object-auditor/0.log" Jan 27 21:54:47 crc kubenswrapper[4858]: I0127 21:54:47.474653 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/object-expirer/0.log" Jan 27 21:54:47 crc kubenswrapper[4858]: I0127 21:54:47.581741 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/object-server/0.log" Jan 27 21:54:47 crc kubenswrapper[4858]: I0127 21:54:47.622891 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/object-replicator/0.log" Jan 27 21:54:47 crc kubenswrapper[4858]: I0127 21:54:47.888281 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/object-updater/0.log" Jan 27 21:54:47 crc kubenswrapper[4858]: I0127 21:54:47.913995 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/rsync/0.log" Jan 27 21:54:48 crc kubenswrapper[4858]: I0127 21:54:48.033323 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_177247c1-763d-4d0c-81ba-f538937f0008/swift-recon-cron/0.log" Jan 27 21:54:48 crc kubenswrapper[4858]: I0127 21:54:48.162060 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-sxmhm_9116e36c-794b-4e0c-ad98-58f8daa17fc1/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:54:48 crc kubenswrapper[4858]: I0127 21:54:48.345091 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0671e111-61e9-439b-9457-c29b7d18a1f7/tempest-tests-tempest-tests-runner/0.log" Jan 27 21:54:48 crc kubenswrapper[4858]: I0127 21:54:48.404841 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_4bc44a79-c7d4-472d-9d17-2b69e894630f/test-operator-logs-container/0.log" Jan 27 21:54:48 crc kubenswrapper[4858]: I0127 21:54:48.525932 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-rp4qk_29f6143e-1aa7-4d0f-91ce-267d3e2fe84e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 21:54:49 crc kubenswrapper[4858]: I0127 21:54:49.367221 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_16f660ae-e2f1-4e87-9e6a-83338f9228e9/watcher-applier/0.log" Jan 27 21:54:50 crc kubenswrapper[4858]: I0127 21:54:50.062818 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_49af05ef-dc73-4178-a4f8-ce9191c8fa3d/watcher-api-log/0.log" Jan 27 21:54:50 crc kubenswrapper[4858]: I0127 21:54:50.070731 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 21:54:50 crc kubenswrapper[4858]: E0127 21:54:50.071052 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:54:51 crc kubenswrapper[4858]: I0127 21:54:51.890183 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_bd2f70df-955b-44ba-a1be-a2f9d06a862c/memcached/0.log" Jan 27 21:54:52 crc kubenswrapper[4858]: I0127 21:54:52.761293 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_7bcc0d9d-f611-4dd6-96ab-41df437ab21d/watcher-decision-engine/0.log" Jan 27 21:54:53 crc kubenswrapper[4858]: I0127 21:54:53.837838 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_49af05ef-dc73-4178-a4f8-ce9191c8fa3d/watcher-api/0.log" Jan 27 21:55:01 crc kubenswrapper[4858]: I0127 21:55:01.072012 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 21:55:01 crc kubenswrapper[4858]: E0127 21:55:01.073706 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" 
podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:55:12 crc kubenswrapper[4858]: I0127 21:55:12.072607 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 21:55:12 crc kubenswrapper[4858]: E0127 21:55:12.074830 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:55:17 crc kubenswrapper[4858]: I0127 21:55:17.636198 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6_52190d7c-3903-46b5-8fa4-96ef6b154bbe/util/0.log" Jan 27 21:55:17 crc kubenswrapper[4858]: I0127 21:55:17.802157 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6_52190d7c-3903-46b5-8fa4-96ef6b154bbe/util/0.log" Jan 27 21:55:17 crc kubenswrapper[4858]: I0127 21:55:17.851335 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6_52190d7c-3903-46b5-8fa4-96ef6b154bbe/pull/0.log" Jan 27 21:55:17 crc kubenswrapper[4858]: I0127 21:55:17.856702 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6_52190d7c-3903-46b5-8fa4-96ef6b154bbe/pull/0.log" Jan 27 21:55:18 crc kubenswrapper[4858]: I0127 21:55:18.052278 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6_52190d7c-3903-46b5-8fa4-96ef6b154bbe/pull/0.log" Jan 27 21:55:18 crc kubenswrapper[4858]: I0127 21:55:18.053438 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6_52190d7c-3903-46b5-8fa4-96ef6b154bbe/util/0.log" Jan 27 21:55:18 crc kubenswrapper[4858]: I0127 21:55:18.109972 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_32272559af14db1563f66d7190c2b5f031ee942088e04accd760d0031b7vvf6_52190d7c-3903-46b5-8fa4-96ef6b154bbe/extract/0.log" Jan 27 21:55:18 crc kubenswrapper[4858]: I0127 21:55:18.290375 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-pl99n_f2bb693c-1d95-483e-b7c5-151516bd015e/manager/0.log" Jan 27 21:55:18 crc kubenswrapper[4858]: I0127 21:55:18.336430 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-6tsd5_50605190-4834-4573-b8c9-70f5ca60b820/manager/0.log" Jan 27 21:55:18 crc kubenswrapper[4858]: I0127 21:55:18.453142 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-qlssw_fd5e8600-d46a-4463-b592-f6d6025bf66f/manager/0.log" Jan 27 21:55:18 crc kubenswrapper[4858]: I0127 21:55:18.570624 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-hg2t5_eba796fd-f7a8-4f83-9a75-7036f77d73f1/manager/0.log" Jan 
27 21:55:18 crc kubenswrapper[4858]: I0127 21:55:18.668191 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-m4pbf_6074b126-8795-48bc-8984-fc25402032a2/manager/0.log" Jan 27 21:55:18 crc kubenswrapper[4858]: I0127 21:55:18.781981 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-6rrnl_e86b137e-cd0c-4243-801f-dad4eb19373b/manager/0.log" Jan 27 21:55:19 crc kubenswrapper[4858]: I0127 21:55:19.007859 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-k69nl_397758a8-62c2-41ba-8177-5309d797bb2f/manager/0.log" Jan 27 21:55:19 crc kubenswrapper[4858]: I0127 21:55:19.209759 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-tskvm_f5527334-db65-4031-a24f-9aafcffb6708/manager/0.log" Jan 27 21:55:19 crc kubenswrapper[4858]: I0127 21:55:19.253815 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-cnhqv_7de86ff1-90b3-470b-bab1-344555db1153/manager/0.log" Jan 27 21:55:19 crc kubenswrapper[4858]: I0127 21:55:19.261799 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-dxfwr_8a5eb91f-e957-4f9d-86c9-5f8905c6bee4/manager/0.log" Jan 27 21:55:19 crc kubenswrapper[4858]: I0127 21:55:19.467821 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-54b92_446c00be-b860-4220-bcc1-457005d92650/manager/0.log" Jan 27 21:55:19 crc kubenswrapper[4858]: I0127 21:55:19.537042 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-m6lz4_feb30e7d-db27-4e87-ba07-f4730b228588/manager/0.log" Jan 27 21:55:19 crc kubenswrapper[4858]: I0127 21:55:19.738340 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-dxhnn_74b2bb8d-cae5-4033-b999-73e3ed604cb9/manager/0.log" Jan 27 21:55:19 crc kubenswrapper[4858]: I0127 21:55:19.761525 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-f7gwl_314a20ef-a97b-40a6-8a85-b118e64d9a3a/manager/0.log" Jan 27 21:55:19 crc kubenswrapper[4858]: I0127 21:55:19.925769 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854dlsrh_c3bd5d36-c726-4b79-9c08-22bb23dabc28/manager/0.log" Jan 27 21:55:20 crc kubenswrapper[4858]: I0127 21:55:20.086474 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6f9f75d44c-9lgbg_307753ad-bb67-4220-9b56-e588037652f4/operator/0.log" Jan 27 21:55:20 crc kubenswrapper[4858]: I0127 21:55:20.276270 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-w7wsb_ba4e832d-ff36-45e9-90b9-44e125906dba/registry-server/0.log" Jan 27 21:55:20 crc kubenswrapper[4858]: I0127 21:55:20.504910 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-jrl5h_968ee010-0e16-462d-82d3-7c5d61f107a1/manager/0.log" Jan 27 21:55:20 crc kubenswrapper[4858]: I0127 
21:55:20.666746 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-8st2f_c4dfc413-8d91-4a08-aef6-47188c0971c4/manager/0.log" Jan 27 21:55:20 crc kubenswrapper[4858]: I0127 21:55:20.880901 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-tc2j8_9e4c347f-b102-40c1-8935-77fdef528d14/operator/0.log" Jan 27 21:55:21 crc kubenswrapper[4858]: I0127 21:55:21.068702 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-cp6lt_b778d97d-e9dc-4017-94ff-9cfd82322a3a/manager/0.log" Jan 27 21:55:21 crc kubenswrapper[4858]: I0127 21:55:21.402755 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-7w8hk_c15c4bec-780c-42d1-8f36-618b255a95f6/manager/0.log" Jan 27 21:55:21 crc kubenswrapper[4858]: I0127 21:55:21.507482 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-sgtcz_304980dc-cb07-41fa-ba11-1262d5a2b43b/manager/0.log" Jan 27 21:55:21 crc kubenswrapper[4858]: I0127 21:55:21.556450 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-86d6949bb8-k78rw_f75129ba-73c8-4f91-99b0-42d191fb0510/manager/0.log" Jan 27 21:55:21 crc kubenswrapper[4858]: I0127 21:55:21.704060 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5975f685d8-snnk5_112cff1f-1841-4fe8-96e2-95d2be2957a2/manager/0.log" Jan 27 21:55:24 crc kubenswrapper[4858]: I0127 21:55:24.071534 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 21:55:24 crc kubenswrapper[4858]: E0127 21:55:24.072123 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:55:38 crc kubenswrapper[4858]: I0127 21:55:38.071032 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 21:55:38 crc kubenswrapper[4858]: E0127 21:55:38.071776 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:55:40 crc kubenswrapper[4858]: I0127 21:55:40.113151 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-c6zzp_01f33b82-5877-4c9d-ba44-3c6676c5f41d/control-plane-machine-set-operator/0.log" Jan 27 21:55:40 crc kubenswrapper[4858]: I0127 21:55:40.311942 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mqblw_f20c3023-909c-4904-b65a-f4627bf28119/kube-rbac-proxy/0.log" Jan 27 21:55:40 crc kubenswrapper[4858]: I0127 21:55:40.313153 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mqblw_f20c3023-909c-4904-b65a-f4627bf28119/machine-api-operator/0.log" Jan 27 21:55:52 crc kubenswrapper[4858]: I0127 21:55:52.599873 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-n8kqf_f425f50a-9405-4c04-b320-22524d815b8a/cert-manager-controller/0.log" Jan 27 21:55:52 crc kubenswrapper[4858]: I0127 21:55:52.795581 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-x9cph_92b94f6b-96ed-4ee3-96e6-8d1c22358773/cert-manager-cainjector/0.log" Jan 27 21:55:52 crc kubenswrapper[4858]: I0127 21:55:52.822373 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-86ftq_7d2f237c-d08c-479b-a3e3-7ef983dc2c41/cert-manager-webhook/0.log" Jan 27 21:55:53 crc kubenswrapper[4858]: I0127 21:55:53.070817 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 21:55:53 crc kubenswrapper[4858]: E0127 21:55:53.071100 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:56:03 crc kubenswrapper[4858]: I0127 21:56:03.888478 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-65pb8"] Jan 27 21:56:03 crc kubenswrapper[4858]: E0127 21:56:03.889651 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00fbb9e-09f9-452b-9c9c-7dc0470c9393" containerName="container-00" Jan 27 21:56:03 crc kubenswrapper[4858]: I0127 21:56:03.889672 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00fbb9e-09f9-452b-9c9c-7dc0470c9393" containerName="container-00" Jan 27 21:56:03 crc kubenswrapper[4858]: I0127 21:56:03.889980 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e00fbb9e-09f9-452b-9c9c-7dc0470c9393" containerName="container-00" Jan 27 21:56:03 crc kubenswrapper[4858]: I0127 21:56:03.891817 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-65pb8" Jan 27 21:56:03 crc kubenswrapper[4858]: I0127 21:56:03.903723 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-65pb8"] Jan 27 21:56:04 crc kubenswrapper[4858]: I0127 21:56:04.085176 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-catalog-content\") pod \"redhat-marketplace-65pb8\" (UID: \"73618e4a-61d4-40e5-bbf7-6b72b7b611c5\") " pod="openshift-marketplace/redhat-marketplace-65pb8" Jan 27 21:56:04 crc kubenswrapper[4858]: I0127 21:56:04.085320 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-utilities\") pod \"redhat-marketplace-65pb8\" (UID: \"73618e4a-61d4-40e5-bbf7-6b72b7b611c5\") " pod="openshift-marketplace/redhat-marketplace-65pb8" Jan 27 21:56:04 crc kubenswrapper[4858]: I0127 21:56:04.085345 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9ccq\" (UniqueName: \"kubernetes.io/projected/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-kube-api-access-h9ccq\") pod \"redhat-marketplace-65pb8\" (UID: \"73618e4a-61d4-40e5-bbf7-6b72b7b611c5\") " pod="openshift-marketplace/redhat-marketplace-65pb8" Jan 27 21:56:04 crc kubenswrapper[4858]: I0127 21:56:04.187656 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-utilities\") pod \"redhat-marketplace-65pb8\" (UID: \"73618e4a-61d4-40e5-bbf7-6b72b7b611c5\") " pod="openshift-marketplace/redhat-marketplace-65pb8" Jan 27 21:56:04 crc kubenswrapper[4858]: I0127 21:56:04.187718 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9ccq\" (UniqueName: \"kubernetes.io/projected/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-kube-api-access-h9ccq\") pod \"redhat-marketplace-65pb8\" (UID: \"73618e4a-61d4-40e5-bbf7-6b72b7b611c5\") " pod="openshift-marketplace/redhat-marketplace-65pb8" Jan 27 21:56:04 crc kubenswrapper[4858]: I0127 21:56:04.187853 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-catalog-content\") pod \"redhat-marketplace-65pb8\" (UID: \"73618e4a-61d4-40e5-bbf7-6b72b7b611c5\") " pod="openshift-marketplace/redhat-marketplace-65pb8" Jan 27 21:56:04 crc kubenswrapper[4858]: I0127 21:56:04.188109 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-utilities\") pod \"redhat-marketplace-65pb8\" (UID: \"73618e4a-61d4-40e5-bbf7-6b72b7b611c5\") " pod="openshift-marketplace/redhat-marketplace-65pb8" Jan 27 21:56:04 crc kubenswrapper[4858]: I0127 21:56:04.188611 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-catalog-content\") pod \"redhat-marketplace-65pb8\" (UID: \"73618e4a-61d4-40e5-bbf7-6b72b7b611c5\") " pod="openshift-marketplace/redhat-marketplace-65pb8" Jan 27 21:56:04 crc kubenswrapper[4858]: I0127 21:56:04.213913 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-h9ccq\" (UniqueName: \"kubernetes.io/projected/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-kube-api-access-h9ccq\") pod \"redhat-marketplace-65pb8\" (UID: \"73618e4a-61d4-40e5-bbf7-6b72b7b611c5\") " pod="openshift-marketplace/redhat-marketplace-65pb8" Jan 27 21:56:04 crc kubenswrapper[4858]: I0127 21:56:04.222365 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-65pb8" Jan 27 21:56:04 crc kubenswrapper[4858]: I0127 21:56:04.746401 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-65pb8"] Jan 27 21:56:05 crc kubenswrapper[4858]: I0127 21:56:05.603752 4858 generic.go:334] "Generic (PLEG): container finished" podID="73618e4a-61d4-40e5-bbf7-6b72b7b611c5" containerID="d44c93f138d48b973aa944fc04f933d93488880cd3309e5a7725e9fa5e1aa96f" exitCode=0 Jan 27 21:56:05 crc kubenswrapper[4858]: I0127 21:56:05.603886 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65pb8" event={"ID":"73618e4a-61d4-40e5-bbf7-6b72b7b611c5","Type":"ContainerDied","Data":"d44c93f138d48b973aa944fc04f933d93488880cd3309e5a7725e9fa5e1aa96f"} Jan 27 21:56:05 crc kubenswrapper[4858]: I0127 21:56:05.604361 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65pb8" event={"ID":"73618e4a-61d4-40e5-bbf7-6b72b7b611c5","Type":"ContainerStarted","Data":"371359c6c12890d809727a901cc30dba5d314e3def5bd9177e7e24b8c4a6ee1c"} Jan 27 21:56:05 crc kubenswrapper[4858]: I0127 21:56:05.607776 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 21:56:05 crc kubenswrapper[4858]: I0127 21:56:05.755512 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-pjzt5_8ee1edac-ca66-4ed5-a281-67b735710be5/nmstate-console-plugin/0.log" Jan 27 21:56:05 crc kubenswrapper[4858]: I0127 21:56:05.942027 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-xxvgs_7bfb1746-53f8-427e-ab49-1b84279b9437/nmstate-handler/0.log" Jan 27 21:56:05 crc kubenswrapper[4858]: I0127 21:56:05.966202 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dqkn5_a9cfc031-eed0-42fd-94cc-707c19c84cae/kube-rbac-proxy/0.log" Jan 27 21:56:06 crc kubenswrapper[4858]: I0127 21:56:06.071597 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 21:56:06 crc kubenswrapper[4858]: E0127 21:56:06.071908 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:56:06 crc kubenswrapper[4858]: I0127 21:56:06.093894 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-dqkn5_a9cfc031-eed0-42fd-94cc-707c19c84cae/nmstate-metrics/0.log" Jan 27 21:56:06 crc kubenswrapper[4858]: I0127 21:56:06.188840 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-9z7zh_613b924b-b7a1-4507-94ed-be8377c1d87d/nmstate-operator/0.log" Jan 27 21:56:06 crc kubenswrapper[4858]: I0127 21:56:06.292979 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-6bf2p_f00f3a98-58f2-445c-a008-290a987092a2/nmstate-webhook/0.log" Jan 27 21:56:06 crc kubenswrapper[4858]: I0127 21:56:06.614150 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65pb8" event={"ID":"73618e4a-61d4-40e5-bbf7-6b72b7b611c5","Type":"ContainerStarted","Data":"274d4829477160fa78157aaf5a37acae530736a5993e7b0c309cfa6765845908"} Jan 27 21:56:07 crc kubenswrapper[4858]: I0127 21:56:07.623892 4858 generic.go:334] "Generic (PLEG): container finished" podID="73618e4a-61d4-40e5-bbf7-6b72b7b611c5" containerID="274d4829477160fa78157aaf5a37acae530736a5993e7b0c309cfa6765845908" exitCode=0 Jan 27 21:56:07 crc kubenswrapper[4858]: I0127 21:56:07.623937 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65pb8" event={"ID":"73618e4a-61d4-40e5-bbf7-6b72b7b611c5","Type":"ContainerDied","Data":"274d4829477160fa78157aaf5a37acae530736a5993e7b0c309cfa6765845908"} Jan 27 21:56:08 crc kubenswrapper[4858]: I0127 21:56:08.636274 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65pb8" event={"ID":"73618e4a-61d4-40e5-bbf7-6b72b7b611c5","Type":"ContainerStarted","Data":"316a2b946e9dda17644620d36727718880827df1ccd60c3fadc010c45bf9f1fd"} Jan 27 21:56:08 crc kubenswrapper[4858]: I0127 21:56:08.663957 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-65pb8" podStartSLOduration=3.283827089 podStartE2EDuration="5.66393448s" podCreationTimestamp="2026-01-27 21:56:03 +0000 UTC" firstStartedPulling="2026-01-27 21:56:05.607461443 +0000 UTC m=+6510.315277149" lastFinishedPulling="2026-01-27 21:56:07.987568824 +0000 UTC m=+6512.695384540" observedRunningTime="2026-01-27 21:56:08.653213505 +0000 UTC m=+6513.361029231" watchObservedRunningTime="2026-01-27 21:56:08.66393448 +0000 UTC m=+6513.371750186" Jan 27 21:56:14 crc kubenswrapper[4858]: I0127 21:56:14.223519 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-65pb8" Jan 27 21:56:14 crc kubenswrapper[4858]: I0127 21:56:14.224058 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-65pb8" Jan 27 21:56:14 crc kubenswrapper[4858]: I0127 21:56:14.294162 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-65pb8" Jan 27 21:56:14 crc kubenswrapper[4858]: I0127 21:56:14.745218 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-65pb8" Jan 27 21:56:14 crc kubenswrapper[4858]: I0127 21:56:14.815322 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-65pb8"] Jan 27 21:56:16 crc kubenswrapper[4858]: I0127 21:56:16.704439 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-65pb8" podUID="73618e4a-61d4-40e5-bbf7-6b72b7b611c5" containerName="registry-server" containerID="cri-o://316a2b946e9dda17644620d36727718880827df1ccd60c3fadc010c45bf9f1fd" gracePeriod=2 Jan 27 21:56:17 crc 
kubenswrapper[4858]: I0127 21:56:17.160282 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-65pb8" Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.261095 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9ccq\" (UniqueName: \"kubernetes.io/projected/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-kube-api-access-h9ccq\") pod \"73618e4a-61d4-40e5-bbf7-6b72b7b611c5\" (UID: \"73618e4a-61d4-40e5-bbf7-6b72b7b611c5\") " Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.261197 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-catalog-content\") pod \"73618e4a-61d4-40e5-bbf7-6b72b7b611c5\" (UID: \"73618e4a-61d4-40e5-bbf7-6b72b7b611c5\") " Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.261411 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-utilities\") pod \"73618e4a-61d4-40e5-bbf7-6b72b7b611c5\" (UID: \"73618e4a-61d4-40e5-bbf7-6b72b7b611c5\") " Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.262255 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-utilities" (OuterVolumeSpecName: "utilities") pod "73618e4a-61d4-40e5-bbf7-6b72b7b611c5" (UID: "73618e4a-61d4-40e5-bbf7-6b72b7b611c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.263252 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.274078 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-kube-api-access-h9ccq" (OuterVolumeSpecName: "kube-api-access-h9ccq") pod "73618e4a-61d4-40e5-bbf7-6b72b7b611c5" (UID: "73618e4a-61d4-40e5-bbf7-6b72b7b611c5"). InnerVolumeSpecName "kube-api-access-h9ccq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.290362 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73618e4a-61d4-40e5-bbf7-6b72b7b611c5" (UID: "73618e4a-61d4-40e5-bbf7-6b72b7b611c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.364794 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9ccq\" (UniqueName: \"kubernetes.io/projected/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-kube-api-access-h9ccq\") on node \"crc\" DevicePath \"\"" Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.364829 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73618e4a-61d4-40e5-bbf7-6b72b7b611c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.717113 4858 generic.go:334] "Generic (PLEG): container finished" podID="73618e4a-61d4-40e5-bbf7-6b72b7b611c5" containerID="316a2b946e9dda17644620d36727718880827df1ccd60c3fadc010c45bf9f1fd" exitCode=0 Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.717199 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-65pb8" Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.717232 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65pb8" event={"ID":"73618e4a-61d4-40e5-bbf7-6b72b7b611c5","Type":"ContainerDied","Data":"316a2b946e9dda17644620d36727718880827df1ccd60c3fadc010c45bf9f1fd"} Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.717483 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65pb8" event={"ID":"73618e4a-61d4-40e5-bbf7-6b72b7b611c5","Type":"ContainerDied","Data":"371359c6c12890d809727a901cc30dba5d314e3def5bd9177e7e24b8c4a6ee1c"} Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.717508 4858 scope.go:117] "RemoveContainer" containerID="316a2b946e9dda17644620d36727718880827df1ccd60c3fadc010c45bf9f1fd" Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.751647 4858 scope.go:117] "RemoveContainer" containerID="274d4829477160fa78157aaf5a37acae530736a5993e7b0c309cfa6765845908" Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.753154 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-65pb8"] Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.765797 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-65pb8"] Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.782170 4858 scope.go:117] "RemoveContainer" containerID="d44c93f138d48b973aa944fc04f933d93488880cd3309e5a7725e9fa5e1aa96f" Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.839776 4858 scope.go:117] "RemoveContainer" containerID="316a2b946e9dda17644620d36727718880827df1ccd60c3fadc010c45bf9f1fd" Jan 27 21:56:17 crc kubenswrapper[4858]: E0127 21:56:17.840383 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"316a2b946e9dda17644620d36727718880827df1ccd60c3fadc010c45bf9f1fd\": container with ID starting with 316a2b946e9dda17644620d36727718880827df1ccd60c3fadc010c45bf9f1fd not found: ID does not exist" containerID="316a2b946e9dda17644620d36727718880827df1ccd60c3fadc010c45bf9f1fd" Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.840438 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"316a2b946e9dda17644620d36727718880827df1ccd60c3fadc010c45bf9f1fd"} err="failed to get container status 
\"316a2b946e9dda17644620d36727718880827df1ccd60c3fadc010c45bf9f1fd\": rpc error: code = NotFound desc = could not find container \"316a2b946e9dda17644620d36727718880827df1ccd60c3fadc010c45bf9f1fd\": container with ID starting with 316a2b946e9dda17644620d36727718880827df1ccd60c3fadc010c45bf9f1fd not found: ID does not exist" Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.840471 4858 scope.go:117] "RemoveContainer" containerID="274d4829477160fa78157aaf5a37acae530736a5993e7b0c309cfa6765845908" Jan 27 21:56:17 crc kubenswrapper[4858]: E0127 21:56:17.843157 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"274d4829477160fa78157aaf5a37acae530736a5993e7b0c309cfa6765845908\": container with ID starting with 274d4829477160fa78157aaf5a37acae530736a5993e7b0c309cfa6765845908 not found: ID does not exist" containerID="274d4829477160fa78157aaf5a37acae530736a5993e7b0c309cfa6765845908" Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.843215 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"274d4829477160fa78157aaf5a37acae530736a5993e7b0c309cfa6765845908"} err="failed to get container status \"274d4829477160fa78157aaf5a37acae530736a5993e7b0c309cfa6765845908\": rpc error: code = NotFound desc = could not find container \"274d4829477160fa78157aaf5a37acae530736a5993e7b0c309cfa6765845908\": container with ID starting with 274d4829477160fa78157aaf5a37acae530736a5993e7b0c309cfa6765845908 not found: ID does not exist" Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.843248 4858 scope.go:117] "RemoveContainer" containerID="d44c93f138d48b973aa944fc04f933d93488880cd3309e5a7725e9fa5e1aa96f" Jan 27 21:56:17 crc kubenswrapper[4858]: E0127 21:56:17.843748 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d44c93f138d48b973aa944fc04f933d93488880cd3309e5a7725e9fa5e1aa96f\": container with ID starting with d44c93f138d48b973aa944fc04f933d93488880cd3309e5a7725e9fa5e1aa96f not found: ID does not exist" containerID="d44c93f138d48b973aa944fc04f933d93488880cd3309e5a7725e9fa5e1aa96f" Jan 27 21:56:17 crc kubenswrapper[4858]: I0127 21:56:17.843797 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d44c93f138d48b973aa944fc04f933d93488880cd3309e5a7725e9fa5e1aa96f"} err="failed to get container status \"d44c93f138d48b973aa944fc04f933d93488880cd3309e5a7725e9fa5e1aa96f\": rpc error: code = NotFound desc = could not find container \"d44c93f138d48b973aa944fc04f933d93488880cd3309e5a7725e9fa5e1aa96f\": container with ID starting with d44c93f138d48b973aa944fc04f933d93488880cd3309e5a7725e9fa5e1aa96f not found: ID does not exist" Jan 27 21:56:18 crc kubenswrapper[4858]: I0127 21:56:18.081872 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73618e4a-61d4-40e5-bbf7-6b72b7b611c5" path="/var/lib/kubelet/pods/73618e4a-61d4-40e5-bbf7-6b72b7b611c5/volumes" Jan 27 21:56:19 crc kubenswrapper[4858]: I0127 21:56:19.700945 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-bdznk_35e8e577-768b-425e-ae5e-74f9f4710566/prometheus-operator/0.log" Jan 27 21:56:19 crc kubenswrapper[4858]: I0127 21:56:19.866219 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh_c4c617c2-8b14-4e9c-8a40-ab1353beeb33/prometheus-operator-admission-webhook/0.log" Jan 27 21:56:19 crc kubenswrapper[4858]: I0127 21:56:19.933354 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57c849b6b8-vk825_812a6b90-9a07-4f7f-864d-baa13b5ab210/prometheus-operator-admission-webhook/0.log" Jan 27 21:56:20 crc kubenswrapper[4858]: I0127 21:56:20.071648 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 21:56:20 crc kubenswrapper[4858]: E0127 21:56:20.072009 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:56:20 crc kubenswrapper[4858]: I0127 21:56:20.092446 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-dj5bj_40809707-fd14-4599-a0ac-0bcb0c90661d/operator/0.log" Jan 27 21:56:20 crc kubenswrapper[4858]: I0127 21:56:20.136024 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-nfc2q_3c0cbb64-d018-496a-a983-8c4761f142ed/perses-operator/0.log" Jan 27 21:56:23 crc kubenswrapper[4858]: I0127 21:56:23.342380 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z42rg"] Jan 27 21:56:23 crc kubenswrapper[4858]: E0127 21:56:23.343562 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73618e4a-61d4-40e5-bbf7-6b72b7b611c5" containerName="extract-content" Jan 27 21:56:23 crc kubenswrapper[4858]: I0127 21:56:23.343581 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="73618e4a-61d4-40e5-bbf7-6b72b7b611c5" containerName="extract-content" Jan 27 21:56:23 crc kubenswrapper[4858]: E0127 21:56:23.343623 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73618e4a-61d4-40e5-bbf7-6b72b7b611c5" containerName="registry-server" Jan 27 21:56:23 crc kubenswrapper[4858]: I0127 21:56:23.343634 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="73618e4a-61d4-40e5-bbf7-6b72b7b611c5" containerName="registry-server" Jan 27 21:56:23 crc kubenswrapper[4858]: E0127 21:56:23.343658 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73618e4a-61d4-40e5-bbf7-6b72b7b611c5" containerName="extract-utilities" Jan 27 21:56:23 crc kubenswrapper[4858]: I0127 21:56:23.343666 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="73618e4a-61d4-40e5-bbf7-6b72b7b611c5" containerName="extract-utilities" Jan 27 21:56:23 crc kubenswrapper[4858]: I0127 21:56:23.343907 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="73618e4a-61d4-40e5-bbf7-6b72b7b611c5" containerName="registry-server" Jan 27 21:56:23 crc kubenswrapper[4858]: I0127 21:56:23.345636 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z42rg" Jan 27 21:56:23 crc kubenswrapper[4858]: I0127 21:56:23.361982 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z42rg"] Jan 27 21:56:23 crc kubenswrapper[4858]: I0127 21:56:23.495157 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f256dd82-71d1-4488-a8b1-d64d669dbe7c-utilities\") pod \"community-operators-z42rg\" (UID: \"f256dd82-71d1-4488-a8b1-d64d669dbe7c\") " pod="openshift-marketplace/community-operators-z42rg" Jan 27 21:56:23 crc kubenswrapper[4858]: I0127 21:56:23.495274 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f256dd82-71d1-4488-a8b1-d64d669dbe7c-catalog-content\") pod \"community-operators-z42rg\" (UID: \"f256dd82-71d1-4488-a8b1-d64d669dbe7c\") " pod="openshift-marketplace/community-operators-z42rg" Jan 27 21:56:23 crc kubenswrapper[4858]: I0127 21:56:23.495417 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9mcs\" (UniqueName: \"kubernetes.io/projected/f256dd82-71d1-4488-a8b1-d64d669dbe7c-kube-api-access-p9mcs\") pod \"community-operators-z42rg\" (UID: \"f256dd82-71d1-4488-a8b1-d64d669dbe7c\") " pod="openshift-marketplace/community-operators-z42rg" Jan 27 21:56:23 crc kubenswrapper[4858]: I0127 21:56:23.597916 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f256dd82-71d1-4488-a8b1-d64d669dbe7c-utilities\") pod \"community-operators-z42rg\" (UID: \"f256dd82-71d1-4488-a8b1-d64d669dbe7c\") " pod="openshift-marketplace/community-operators-z42rg" Jan 27 21:56:23 crc kubenswrapper[4858]: I0127 21:56:23.597986 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f256dd82-71d1-4488-a8b1-d64d669dbe7c-catalog-content\") pod \"community-operators-z42rg\" (UID: \"f256dd82-71d1-4488-a8b1-d64d669dbe7c\") " pod="openshift-marketplace/community-operators-z42rg" Jan 27 21:56:23 crc kubenswrapper[4858]: I0127 21:56:23.598028 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9mcs\" (UniqueName: \"kubernetes.io/projected/f256dd82-71d1-4488-a8b1-d64d669dbe7c-kube-api-access-p9mcs\") pod \"community-operators-z42rg\" (UID: \"f256dd82-71d1-4488-a8b1-d64d669dbe7c\") " pod="openshift-marketplace/community-operators-z42rg" Jan 27 21:56:23 crc kubenswrapper[4858]: I0127 21:56:23.598570 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f256dd82-71d1-4488-a8b1-d64d669dbe7c-utilities\") pod \"community-operators-z42rg\" (UID: \"f256dd82-71d1-4488-a8b1-d64d669dbe7c\") " pod="openshift-marketplace/community-operators-z42rg" Jan 27 21:56:23 crc kubenswrapper[4858]: I0127 21:56:23.598622 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f256dd82-71d1-4488-a8b1-d64d669dbe7c-catalog-content\") pod \"community-operators-z42rg\" (UID: \"f256dd82-71d1-4488-a8b1-d64d669dbe7c\") " pod="openshift-marketplace/community-operators-z42rg" Jan 27 21:56:23 crc kubenswrapper[4858]: I0127 21:56:23.655720 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p9mcs\" (UniqueName: \"kubernetes.io/projected/f256dd82-71d1-4488-a8b1-d64d669dbe7c-kube-api-access-p9mcs\") pod \"community-operators-z42rg\" (UID: \"f256dd82-71d1-4488-a8b1-d64d669dbe7c\") " pod="openshift-marketplace/community-operators-z42rg" Jan 27 21:56:23 crc kubenswrapper[4858]: I0127 21:56:23.668448 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z42rg" Jan 27 21:56:24 crc kubenswrapper[4858]: I0127 21:56:24.226376 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z42rg"] Jan 27 21:56:24 crc kubenswrapper[4858]: I0127 21:56:24.812725 4858 generic.go:334] "Generic (PLEG): container finished" podID="f256dd82-71d1-4488-a8b1-d64d669dbe7c" containerID="bb8d2a146964352fcf8bfd857f4c2c2bbe22fe18ed2e378d3fa290a0b62a82b1" exitCode=0 Jan 27 21:56:24 crc kubenswrapper[4858]: I0127 21:56:24.813028 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z42rg" event={"ID":"f256dd82-71d1-4488-a8b1-d64d669dbe7c","Type":"ContainerDied","Data":"bb8d2a146964352fcf8bfd857f4c2c2bbe22fe18ed2e378d3fa290a0b62a82b1"} Jan 27 21:56:24 crc kubenswrapper[4858]: I0127 21:56:24.813063 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z42rg" event={"ID":"f256dd82-71d1-4488-a8b1-d64d669dbe7c","Type":"ContainerStarted","Data":"c46786b172b3a150b99a84e6c6c9258f65f70627a3efab2bc285dbab9228de14"} Jan 27 21:56:25 crc kubenswrapper[4858]: I0127 21:56:25.824831 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z42rg" event={"ID":"f256dd82-71d1-4488-a8b1-d64d669dbe7c","Type":"ContainerStarted","Data":"5d226794add9f40efc4514e0d5661528e5f7cea0ca25ec81cd1588df9eadf9a8"} Jan 27 21:56:26 crc kubenswrapper[4858]: I0127 21:56:26.360667 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gzh8k"] Jan 27 21:56:26 crc kubenswrapper[4858]: I0127 21:56:26.366266 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gzh8k" Jan 27 21:56:26 crc kubenswrapper[4858]: I0127 21:56:26.375329 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gzh8k"] Jan 27 21:56:26 crc kubenswrapper[4858]: I0127 21:56:26.467733 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3471a68-0604-4e2d-8a76-b560eee0aae3-utilities\") pod \"redhat-operators-gzh8k\" (UID: \"e3471a68-0604-4e2d-8a76-b560eee0aae3\") " pod="openshift-marketplace/redhat-operators-gzh8k" Jan 27 21:56:26 crc kubenswrapper[4858]: I0127 21:56:26.467836 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3471a68-0604-4e2d-8a76-b560eee0aae3-catalog-content\") pod \"redhat-operators-gzh8k\" (UID: \"e3471a68-0604-4e2d-8a76-b560eee0aae3\") " pod="openshift-marketplace/redhat-operators-gzh8k" Jan 27 21:56:26 crc kubenswrapper[4858]: I0127 21:56:26.468095 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n89k\" (UniqueName: \"kubernetes.io/projected/e3471a68-0604-4e2d-8a76-b560eee0aae3-kube-api-access-6n89k\") pod \"redhat-operators-gzh8k\" (UID: \"e3471a68-0604-4e2d-8a76-b560eee0aae3\") " pod="openshift-marketplace/redhat-operators-gzh8k" Jan 27 21:56:26 crc kubenswrapper[4858]: I0127 21:56:26.569620 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3471a68-0604-4e2d-8a76-b560eee0aae3-catalog-content\") pod \"redhat-operators-gzh8k\" (UID: \"e3471a68-0604-4e2d-8a76-b560eee0aae3\") " pod="openshift-marketplace/redhat-operators-gzh8k" Jan 27 21:56:26 crc kubenswrapper[4858]: I0127 21:56:26.569751 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n89k\" (UniqueName: \"kubernetes.io/projected/e3471a68-0604-4e2d-8a76-b560eee0aae3-kube-api-access-6n89k\") pod \"redhat-operators-gzh8k\" (UID: \"e3471a68-0604-4e2d-8a76-b560eee0aae3\") " pod="openshift-marketplace/redhat-operators-gzh8k" Jan 27 21:56:26 crc kubenswrapper[4858]: I0127 21:56:26.569889 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3471a68-0604-4e2d-8a76-b560eee0aae3-utilities\") pod \"redhat-operators-gzh8k\" (UID: \"e3471a68-0604-4e2d-8a76-b560eee0aae3\") " pod="openshift-marketplace/redhat-operators-gzh8k" Jan 27 21:56:26 crc kubenswrapper[4858]: I0127 21:56:26.570492 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3471a68-0604-4e2d-8a76-b560eee0aae3-utilities\") pod \"redhat-operators-gzh8k\" (UID: \"e3471a68-0604-4e2d-8a76-b560eee0aae3\") " pod="openshift-marketplace/redhat-operators-gzh8k" Jan 27 21:56:26 crc kubenswrapper[4858]: I0127 21:56:26.570541 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3471a68-0604-4e2d-8a76-b560eee0aae3-catalog-content\") pod \"redhat-operators-gzh8k\" (UID: \"e3471a68-0604-4e2d-8a76-b560eee0aae3\") " pod="openshift-marketplace/redhat-operators-gzh8k" Jan 27 21:56:26 crc kubenswrapper[4858]: I0127 21:56:26.593809 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6n89k\" (UniqueName: \"kubernetes.io/projected/e3471a68-0604-4e2d-8a76-b560eee0aae3-kube-api-access-6n89k\") pod \"redhat-operators-gzh8k\" (UID: \"e3471a68-0604-4e2d-8a76-b560eee0aae3\") " pod="openshift-marketplace/redhat-operators-gzh8k" Jan 27 21:56:26 crc kubenswrapper[4858]: I0127 21:56:26.692588 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gzh8k" Jan 27 21:56:27 crc kubenswrapper[4858]: I0127 21:56:27.273354 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gzh8k"] Jan 27 21:56:27 crc kubenswrapper[4858]: W0127 21:56:27.280921 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3471a68_0604_4e2d_8a76_b560eee0aae3.slice/crio-6378024baabadd6aa9724579f7d24a3558071caaeea1b0e89be53eacdd2dea58 WatchSource:0}: Error finding container 6378024baabadd6aa9724579f7d24a3558071caaeea1b0e89be53eacdd2dea58: Status 404 returned error can't find the container with id 6378024baabadd6aa9724579f7d24a3558071caaeea1b0e89be53eacdd2dea58 Jan 27 21:56:27 crc kubenswrapper[4858]: I0127 21:56:27.849130 4858 generic.go:334] "Generic (PLEG): container finished" podID="e3471a68-0604-4e2d-8a76-b560eee0aae3" containerID="1aec661c9b4f9ae888b5935ce54d0aaab14aa475aa286e7a0fccc012133d6287" exitCode=0 Jan 27 21:56:27 crc kubenswrapper[4858]: I0127 21:56:27.849188 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzh8k" event={"ID":"e3471a68-0604-4e2d-8a76-b560eee0aae3","Type":"ContainerDied","Data":"1aec661c9b4f9ae888b5935ce54d0aaab14aa475aa286e7a0fccc012133d6287"} Jan 27 21:56:27 crc kubenswrapper[4858]: I0127 21:56:27.849809 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzh8k" event={"ID":"e3471a68-0604-4e2d-8a76-b560eee0aae3","Type":"ContainerStarted","Data":"6378024baabadd6aa9724579f7d24a3558071caaeea1b0e89be53eacdd2dea58"} Jan 27 21:56:27 crc kubenswrapper[4858]: I0127 21:56:27.852405 4858 generic.go:334] "Generic (PLEG): container finished" podID="f256dd82-71d1-4488-a8b1-d64d669dbe7c" containerID="5d226794add9f40efc4514e0d5661528e5f7cea0ca25ec81cd1588df9eadf9a8" exitCode=0 Jan 27 21:56:27 crc kubenswrapper[4858]: I0127 21:56:27.852468 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z42rg" event={"ID":"f256dd82-71d1-4488-a8b1-d64d669dbe7c","Type":"ContainerDied","Data":"5d226794add9f40efc4514e0d5661528e5f7cea0ca25ec81cd1588df9eadf9a8"} Jan 27 21:56:28 crc kubenswrapper[4858]: I0127 21:56:28.865311 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z42rg" event={"ID":"f256dd82-71d1-4488-a8b1-d64d669dbe7c","Type":"ContainerStarted","Data":"d8ba855c54ed410e2a5c5497a6df833aadd5809a8b145a259421de252fe59315"} Jan 27 21:56:28 crc kubenswrapper[4858]: I0127 21:56:28.885959 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z42rg" podStartSLOduration=2.444581491 podStartE2EDuration="5.885937085s" podCreationTimestamp="2026-01-27 21:56:23 +0000 UTC" firstStartedPulling="2026-01-27 21:56:24.814782009 +0000 UTC m=+6529.522597715" lastFinishedPulling="2026-01-27 21:56:28.256137603 +0000 UTC m=+6532.963953309" observedRunningTime="2026-01-27 21:56:28.885002818 +0000 UTC m=+6533.592818534" watchObservedRunningTime="2026-01-27 
21:56:28.885937085 +0000 UTC m=+6533.593752791" Jan 27 21:56:29 crc kubenswrapper[4858]: I0127 21:56:29.876203 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzh8k" event={"ID":"e3471a68-0604-4e2d-8a76-b560eee0aae3","Type":"ContainerStarted","Data":"4233efc6853bb2a50aca841390ebca21f5402ed60b55c9bce27eafcb43108bb3"} Jan 27 21:56:31 crc kubenswrapper[4858]: I0127 21:56:31.070732 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 21:56:31 crc kubenswrapper[4858]: E0127 21:56:31.071295 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:56:33 crc kubenswrapper[4858]: I0127 21:56:33.675591 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z42rg" Jan 27 21:56:33 crc kubenswrapper[4858]: I0127 21:56:33.675963 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z42rg" Jan 27 21:56:33 crc kubenswrapper[4858]: I0127 21:56:33.736947 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z42rg" Jan 27 21:56:33 crc kubenswrapper[4858]: I0127 21:56:33.925136 4858 generic.go:334] "Generic (PLEG): container finished" podID="e3471a68-0604-4e2d-8a76-b560eee0aae3" containerID="4233efc6853bb2a50aca841390ebca21f5402ed60b55c9bce27eafcb43108bb3" exitCode=0 Jan 27 21:56:33 crc kubenswrapper[4858]: I0127 21:56:33.925300 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzh8k" event={"ID":"e3471a68-0604-4e2d-8a76-b560eee0aae3","Type":"ContainerDied","Data":"4233efc6853bb2a50aca841390ebca21f5402ed60b55c9bce27eafcb43108bb3"} Jan 27 21:56:33 crc kubenswrapper[4858]: I0127 21:56:33.976178 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z42rg" Jan 27 21:56:34 crc kubenswrapper[4858]: I0127 21:56:34.900160 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-flgg7_12b83edf-4de2-4aa1-8dcd-147782a08fd4/kube-rbac-proxy/0.log" Jan 27 21:56:34 crc kubenswrapper[4858]: I0127 21:56:34.989515 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-flgg7_12b83edf-4de2-4aa1-8dcd-147782a08fd4/controller/0.log" Jan 27 21:56:35 crc kubenswrapper[4858]: I0127 21:56:35.142436 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-k8wd8_131f384a-33a7-421b-be46-51d5561a6e98/frr-k8s-webhook-server/0.log" Jan 27 21:56:35 crc kubenswrapper[4858]: I0127 21:56:35.213475 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-frr-files/0.log" Jan 27 21:56:35 crc kubenswrapper[4858]: I0127 21:56:35.458521 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-frr-files/0.log" Jan 27 21:56:35 crc kubenswrapper[4858]: I0127 
21:56:35.477115 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-reloader/0.log" Jan 27 21:56:35 crc kubenswrapper[4858]: I0127 21:56:35.477253 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-reloader/0.log" Jan 27 21:56:35 crc kubenswrapper[4858]: I0127 21:56:35.494746 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-metrics/0.log" Jan 27 21:56:35 crc kubenswrapper[4858]: I0127 21:56:35.707857 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-frr-files/0.log" Jan 27 21:56:35 crc kubenswrapper[4858]: I0127 21:56:35.714927 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-metrics/0.log" Jan 27 21:56:35 crc kubenswrapper[4858]: I0127 21:56:35.719149 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-reloader/0.log" Jan 27 21:56:35 crc kubenswrapper[4858]: I0127 21:56:35.752194 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-metrics/0.log" Jan 27 21:56:35 crc kubenswrapper[4858]: I0127 21:56:35.951420 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzh8k" event={"ID":"e3471a68-0604-4e2d-8a76-b560eee0aae3","Type":"ContainerStarted","Data":"4f3ae53d2e43b469cf1385370a3847eba45e840590cb29e18155a22b2d3fef8b"} Jan 27 21:56:35 crc kubenswrapper[4858]: I0127 21:56:35.971822 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-metrics/0.log" Jan 27 21:56:35 crc kubenswrapper[4858]: I0127 21:56:35.976587 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gzh8k" podStartSLOduration=3.041246312 podStartE2EDuration="9.976571066s" podCreationTimestamp="2026-01-27 21:56:26 +0000 UTC" firstStartedPulling="2026-01-27 21:56:27.850900558 +0000 UTC m=+6532.558716264" lastFinishedPulling="2026-01-27 21:56:34.786225312 +0000 UTC m=+6539.494041018" observedRunningTime="2026-01-27 21:56:35.971647476 +0000 UTC m=+6540.679463202" watchObservedRunningTime="2026-01-27 21:56:35.976571066 +0000 UTC m=+6540.684386772" Jan 27 21:56:35 crc kubenswrapper[4858]: I0127 21:56:35.990865 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-reloader/0.log" Jan 27 21:56:35 crc kubenswrapper[4858]: I0127 21:56:35.998136 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/cp-frr-files/0.log" Jan 27 21:56:36 crc kubenswrapper[4858]: I0127 21:56:36.013535 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/controller/0.log" Jan 27 21:56:36 crc kubenswrapper[4858]: I0127 21:56:36.201814 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/frr-metrics/0.log" Jan 27 21:56:36 crc kubenswrapper[4858]: I0127 21:56:36.216813 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/kube-rbac-proxy/0.log" Jan 27 21:56:36 crc kubenswrapper[4858]: I0127 21:56:36.300483 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/kube-rbac-proxy-frr/0.log" Jan 27 21:56:36 crc kubenswrapper[4858]: I0127 21:56:36.458267 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/reloader/0.log" Jan 27 21:56:36 crc kubenswrapper[4858]: I0127 21:56:36.590733 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5c967b4747-92zgn_77b99589-63b5-4df6-b9b7-fc5335eb3463/manager/0.log" Jan 27 21:56:36 crc kubenswrapper[4858]: I0127 21:56:36.693572 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gzh8k" Jan 27 21:56:36 crc kubenswrapper[4858]: I0127 21:56:36.693607 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gzh8k" Jan 27 21:56:36 crc kubenswrapper[4858]: I0127 21:56:36.725591 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5c6d8d9f7d-qxppt_079958dc-db6c-480e-90bd-1771c1c404b2/webhook-server/0.log" Jan 27 21:56:37 crc kubenswrapper[4858]: I0127 21:56:37.013783 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hw6th_ca2b2ed3-9750-407d-b919-fd5c6e060e0b/kube-rbac-proxy/0.log" Jan 27 21:56:37 crc kubenswrapper[4858]: I0127 21:56:37.334334 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z42rg"] Jan 27 21:56:37 crc kubenswrapper[4858]: I0127 21:56:37.335178 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z42rg" podUID="f256dd82-71d1-4488-a8b1-d64d669dbe7c" containerName="registry-server" containerID="cri-o://d8ba855c54ed410e2a5c5497a6df833aadd5809a8b145a259421de252fe59315" gracePeriod=2 Jan 27 21:56:37 crc kubenswrapper[4858]: I0127 21:56:37.658463 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hw6th_ca2b2ed3-9750-407d-b919-fd5c6e060e0b/speaker/0.log" Jan 27 21:56:37 crc kubenswrapper[4858]: I0127 21:56:37.748367 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzh8k" podUID="e3471a68-0604-4e2d-8a76-b560eee0aae3" containerName="registry-server" probeResult="failure" output=< Jan 27 21:56:37 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Jan 27 21:56:37 crc kubenswrapper[4858]: > Jan 27 21:56:37 crc kubenswrapper[4858]: I0127 21:56:37.921528 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z42rg" Jan 27 21:56:37 crc kubenswrapper[4858]: I0127 21:56:37.996318 4858 generic.go:334] "Generic (PLEG): container finished" podID="f256dd82-71d1-4488-a8b1-d64d669dbe7c" containerID="d8ba855c54ed410e2a5c5497a6df833aadd5809a8b145a259421de252fe59315" exitCode=0 Jan 27 21:56:37 crc kubenswrapper[4858]: I0127 21:56:37.996367 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z42rg" event={"ID":"f256dd82-71d1-4488-a8b1-d64d669dbe7c","Type":"ContainerDied","Data":"d8ba855c54ed410e2a5c5497a6df833aadd5809a8b145a259421de252fe59315"} Jan 27 21:56:37 crc kubenswrapper[4858]: I0127 21:56:37.996399 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z42rg" event={"ID":"f256dd82-71d1-4488-a8b1-d64d669dbe7c","Type":"ContainerDied","Data":"c46786b172b3a150b99a84e6c6c9258f65f70627a3efab2bc285dbab9228de14"} Jan 27 21:56:37 crc kubenswrapper[4858]: I0127 21:56:37.996421 4858 scope.go:117] "RemoveContainer" containerID="d8ba855c54ed410e2a5c5497a6df833aadd5809a8b145a259421de252fe59315" Jan 27 21:56:37 crc kubenswrapper[4858]: I0127 21:56:37.996443 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z42rg" Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.030541 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9mcs\" (UniqueName: \"kubernetes.io/projected/f256dd82-71d1-4488-a8b1-d64d669dbe7c-kube-api-access-p9mcs\") pod \"f256dd82-71d1-4488-a8b1-d64d669dbe7c\" (UID: \"f256dd82-71d1-4488-a8b1-d64d669dbe7c\") " Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.032831 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f256dd82-71d1-4488-a8b1-d64d669dbe7c-utilities\") pod \"f256dd82-71d1-4488-a8b1-d64d669dbe7c\" (UID: \"f256dd82-71d1-4488-a8b1-d64d669dbe7c\") " Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.032868 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f256dd82-71d1-4488-a8b1-d64d669dbe7c-catalog-content\") pod \"f256dd82-71d1-4488-a8b1-d64d669dbe7c\" (UID: \"f256dd82-71d1-4488-a8b1-d64d669dbe7c\") " Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.034758 4858 scope.go:117] "RemoveContainer" containerID="5d226794add9f40efc4514e0d5661528e5f7cea0ca25ec81cd1588df9eadf9a8" Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.035064 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f256dd82-71d1-4488-a8b1-d64d669dbe7c-utilities" (OuterVolumeSpecName: "utilities") pod "f256dd82-71d1-4488-a8b1-d64d669dbe7c" (UID: "f256dd82-71d1-4488-a8b1-d64d669dbe7c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.035960 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f256dd82-71d1-4488-a8b1-d64d669dbe7c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.060794 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f256dd82-71d1-4488-a8b1-d64d669dbe7c-kube-api-access-p9mcs" (OuterVolumeSpecName: "kube-api-access-p9mcs") pod "f256dd82-71d1-4488-a8b1-d64d669dbe7c" (UID: "f256dd82-71d1-4488-a8b1-d64d669dbe7c"). InnerVolumeSpecName "kube-api-access-p9mcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.085863 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f256dd82-71d1-4488-a8b1-d64d669dbe7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f256dd82-71d1-4488-a8b1-d64d669dbe7c" (UID: "f256dd82-71d1-4488-a8b1-d64d669dbe7c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.102585 4858 scope.go:117] "RemoveContainer" containerID="bb8d2a146964352fcf8bfd857f4c2c2bbe22fe18ed2e378d3fa290a0b62a82b1" Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.138918 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9mcs\" (UniqueName: \"kubernetes.io/projected/f256dd82-71d1-4488-a8b1-d64d669dbe7c-kube-api-access-p9mcs\") on node \"crc\" DevicePath \"\"" Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.138961 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f256dd82-71d1-4488-a8b1-d64d669dbe7c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.167139 4858 scope.go:117] "RemoveContainer" containerID="d8ba855c54ed410e2a5c5497a6df833aadd5809a8b145a259421de252fe59315" Jan 27 21:56:38 crc kubenswrapper[4858]: E0127 21:56:38.167671 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8ba855c54ed410e2a5c5497a6df833aadd5809a8b145a259421de252fe59315\": container with ID starting with d8ba855c54ed410e2a5c5497a6df833aadd5809a8b145a259421de252fe59315 not found: ID does not exist" containerID="d8ba855c54ed410e2a5c5497a6df833aadd5809a8b145a259421de252fe59315" Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.167702 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8ba855c54ed410e2a5c5497a6df833aadd5809a8b145a259421de252fe59315"} err="failed to get container status \"d8ba855c54ed410e2a5c5497a6df833aadd5809a8b145a259421de252fe59315\": rpc error: code = NotFound desc = could not find container \"d8ba855c54ed410e2a5c5497a6df833aadd5809a8b145a259421de252fe59315\": container with ID starting with d8ba855c54ed410e2a5c5497a6df833aadd5809a8b145a259421de252fe59315 not found: ID does not exist" Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.167724 4858 scope.go:117] "RemoveContainer" containerID="5d226794add9f40efc4514e0d5661528e5f7cea0ca25ec81cd1588df9eadf9a8" Jan 27 21:56:38 crc kubenswrapper[4858]: E0127 21:56:38.168088 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"5d226794add9f40efc4514e0d5661528e5f7cea0ca25ec81cd1588df9eadf9a8\": container with ID starting with 5d226794add9f40efc4514e0d5661528e5f7cea0ca25ec81cd1588df9eadf9a8 not found: ID does not exist" containerID="5d226794add9f40efc4514e0d5661528e5f7cea0ca25ec81cd1588df9eadf9a8" Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.168128 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d226794add9f40efc4514e0d5661528e5f7cea0ca25ec81cd1588df9eadf9a8"} err="failed to get container status \"5d226794add9f40efc4514e0d5661528e5f7cea0ca25ec81cd1588df9eadf9a8\": rpc error: code = NotFound desc = could not find container \"5d226794add9f40efc4514e0d5661528e5f7cea0ca25ec81cd1588df9eadf9a8\": container with ID starting with 5d226794add9f40efc4514e0d5661528e5f7cea0ca25ec81cd1588df9eadf9a8 not found: ID does not exist" Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.168154 4858 scope.go:117] "RemoveContainer" containerID="bb8d2a146964352fcf8bfd857f4c2c2bbe22fe18ed2e378d3fa290a0b62a82b1" Jan 27 21:56:38 crc kubenswrapper[4858]: E0127 21:56:38.168526 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb8d2a146964352fcf8bfd857f4c2c2bbe22fe18ed2e378d3fa290a0b62a82b1\": container with ID starting with bb8d2a146964352fcf8bfd857f4c2c2bbe22fe18ed2e378d3fa290a0b62a82b1 not found: ID does not exist" containerID="bb8d2a146964352fcf8bfd857f4c2c2bbe22fe18ed2e378d3fa290a0b62a82b1" Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.168561 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8d2a146964352fcf8bfd857f4c2c2bbe22fe18ed2e378d3fa290a0b62a82b1"} err="failed to get container status \"bb8d2a146964352fcf8bfd857f4c2c2bbe22fe18ed2e378d3fa290a0b62a82b1\": rpc error: code = NotFound desc = could not find container \"bb8d2a146964352fcf8bfd857f4c2c2bbe22fe18ed2e378d3fa290a0b62a82b1\": container with ID starting with bb8d2a146964352fcf8bfd857f4c2c2bbe22fe18ed2e378d3fa290a0b62a82b1 not found: ID does not exist" Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.185773 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zr8cd_69b6591f-5854-4205-8af5-da752f5006ab/frr/0.log" Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.319688 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z42rg"] Jan 27 21:56:38 crc kubenswrapper[4858]: I0127 21:56:38.329339 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z42rg"] Jan 27 21:56:40 crc kubenswrapper[4858]: I0127 21:56:40.084111 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f256dd82-71d1-4488-a8b1-d64d669dbe7c" path="/var/lib/kubelet/pods/f256dd82-71d1-4488-a8b1-d64d669dbe7c/volumes" Jan 27 21:56:46 crc kubenswrapper[4858]: I0127 21:56:46.079657 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 21:56:46 crc kubenswrapper[4858]: E0127 21:56:46.081337 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" 
podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:56:46 crc kubenswrapper[4858]: I0127 21:56:46.743245 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gzh8k" Jan 27 21:56:46 crc kubenswrapper[4858]: I0127 21:56:46.800169 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gzh8k" Jan 27 21:56:46 crc kubenswrapper[4858]: I0127 21:56:46.984924 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gzh8k"] Jan 27 21:56:48 crc kubenswrapper[4858]: I0127 21:56:48.096619 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gzh8k" podUID="e3471a68-0604-4e2d-8a76-b560eee0aae3" containerName="registry-server" containerID="cri-o://4f3ae53d2e43b469cf1385370a3847eba45e840590cb29e18155a22b2d3fef8b" gracePeriod=2 Jan 27 21:56:48 crc kubenswrapper[4858]: I0127 21:56:48.635863 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gzh8k" Jan 27 21:56:48 crc kubenswrapper[4858]: I0127 21:56:48.762417 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3471a68-0604-4e2d-8a76-b560eee0aae3-catalog-content\") pod \"e3471a68-0604-4e2d-8a76-b560eee0aae3\" (UID: \"e3471a68-0604-4e2d-8a76-b560eee0aae3\") " Jan 27 21:56:48 crc kubenswrapper[4858]: I0127 21:56:48.762527 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3471a68-0604-4e2d-8a76-b560eee0aae3-utilities\") pod \"e3471a68-0604-4e2d-8a76-b560eee0aae3\" (UID: \"e3471a68-0604-4e2d-8a76-b560eee0aae3\") " Jan 27 21:56:48 crc kubenswrapper[4858]: I0127 21:56:48.762657 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n89k\" (UniqueName: \"kubernetes.io/projected/e3471a68-0604-4e2d-8a76-b560eee0aae3-kube-api-access-6n89k\") pod \"e3471a68-0604-4e2d-8a76-b560eee0aae3\" (UID: \"e3471a68-0604-4e2d-8a76-b560eee0aae3\") " Jan 27 21:56:48 crc kubenswrapper[4858]: I0127 21:56:48.763909 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3471a68-0604-4e2d-8a76-b560eee0aae3-utilities" (OuterVolumeSpecName: "utilities") pod "e3471a68-0604-4e2d-8a76-b560eee0aae3" (UID: "e3471a68-0604-4e2d-8a76-b560eee0aae3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:56:48 crc kubenswrapper[4858]: I0127 21:56:48.771746 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3471a68-0604-4e2d-8a76-b560eee0aae3-kube-api-access-6n89k" (OuterVolumeSpecName: "kube-api-access-6n89k") pod "e3471a68-0604-4e2d-8a76-b560eee0aae3" (UID: "e3471a68-0604-4e2d-8a76-b560eee0aae3"). InnerVolumeSpecName "kube-api-access-6n89k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:56:48 crc kubenswrapper[4858]: I0127 21:56:48.865021 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3471a68-0604-4e2d-8a76-b560eee0aae3-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 21:56:48 crc kubenswrapper[4858]: I0127 21:56:48.865255 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n89k\" (UniqueName: \"kubernetes.io/projected/e3471a68-0604-4e2d-8a76-b560eee0aae3-kube-api-access-6n89k\") on node \"crc\" DevicePath \"\"" Jan 27 21:56:48 crc kubenswrapper[4858]: I0127 21:56:48.943199 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3471a68-0604-4e2d-8a76-b560eee0aae3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3471a68-0604-4e2d-8a76-b560eee0aae3" (UID: "e3471a68-0604-4e2d-8a76-b560eee0aae3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:56:48 crc kubenswrapper[4858]: I0127 21:56:48.967467 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3471a68-0604-4e2d-8a76-b560eee0aae3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 21:56:49 crc kubenswrapper[4858]: I0127 21:56:49.112881 4858 generic.go:334] "Generic (PLEG): container finished" podID="e3471a68-0604-4e2d-8a76-b560eee0aae3" containerID="4f3ae53d2e43b469cf1385370a3847eba45e840590cb29e18155a22b2d3fef8b" exitCode=0 Jan 27 21:56:49 crc kubenswrapper[4858]: I0127 21:56:49.112925 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzh8k" event={"ID":"e3471a68-0604-4e2d-8a76-b560eee0aae3","Type":"ContainerDied","Data":"4f3ae53d2e43b469cf1385370a3847eba45e840590cb29e18155a22b2d3fef8b"} Jan 27 21:56:49 crc kubenswrapper[4858]: I0127 21:56:49.112962 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzh8k" event={"ID":"e3471a68-0604-4e2d-8a76-b560eee0aae3","Type":"ContainerDied","Data":"6378024baabadd6aa9724579f7d24a3558071caaeea1b0e89be53eacdd2dea58"} Jan 27 21:56:49 crc kubenswrapper[4858]: I0127 21:56:49.112984 4858 scope.go:117] "RemoveContainer" containerID="4f3ae53d2e43b469cf1385370a3847eba45e840590cb29e18155a22b2d3fef8b" Jan 27 21:56:49 crc kubenswrapper[4858]: I0127 21:56:49.113012 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gzh8k" Jan 27 21:56:49 crc kubenswrapper[4858]: I0127 21:56:49.145362 4858 scope.go:117] "RemoveContainer" containerID="4233efc6853bb2a50aca841390ebca21f5402ed60b55c9bce27eafcb43108bb3" Jan 27 21:56:49 crc kubenswrapper[4858]: I0127 21:56:49.148485 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gzh8k"] Jan 27 21:56:49 crc kubenswrapper[4858]: I0127 21:56:49.156586 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gzh8k"] Jan 27 21:56:49 crc kubenswrapper[4858]: I0127 21:56:49.178051 4858 scope.go:117] "RemoveContainer" containerID="1aec661c9b4f9ae888b5935ce54d0aaab14aa475aa286e7a0fccc012133d6287" Jan 27 21:56:49 crc kubenswrapper[4858]: I0127 21:56:49.238518 4858 scope.go:117] "RemoveContainer" containerID="4f3ae53d2e43b469cf1385370a3847eba45e840590cb29e18155a22b2d3fef8b" Jan 27 21:56:49 crc kubenswrapper[4858]: E0127 21:56:49.239055 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f3ae53d2e43b469cf1385370a3847eba45e840590cb29e18155a22b2d3fef8b\": container with ID starting with 4f3ae53d2e43b469cf1385370a3847eba45e840590cb29e18155a22b2d3fef8b not found: ID does not exist" containerID="4f3ae53d2e43b469cf1385370a3847eba45e840590cb29e18155a22b2d3fef8b" Jan 27 21:56:49 crc kubenswrapper[4858]: I0127 21:56:49.239122 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f3ae53d2e43b469cf1385370a3847eba45e840590cb29e18155a22b2d3fef8b"} err="failed to get container status \"4f3ae53d2e43b469cf1385370a3847eba45e840590cb29e18155a22b2d3fef8b\": rpc error: code = NotFound desc = could not find container \"4f3ae53d2e43b469cf1385370a3847eba45e840590cb29e18155a22b2d3fef8b\": container with ID starting with 4f3ae53d2e43b469cf1385370a3847eba45e840590cb29e18155a22b2d3fef8b not found: ID does not exist" Jan 27 21:56:49 crc kubenswrapper[4858]: I0127 21:56:49.239152 4858 scope.go:117] "RemoveContainer" containerID="4233efc6853bb2a50aca841390ebca21f5402ed60b55c9bce27eafcb43108bb3" Jan 27 21:56:49 crc kubenswrapper[4858]: E0127 21:56:49.239668 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4233efc6853bb2a50aca841390ebca21f5402ed60b55c9bce27eafcb43108bb3\": container with ID starting with 4233efc6853bb2a50aca841390ebca21f5402ed60b55c9bce27eafcb43108bb3 not found: ID does not exist" containerID="4233efc6853bb2a50aca841390ebca21f5402ed60b55c9bce27eafcb43108bb3" Jan 27 21:56:49 crc kubenswrapper[4858]: I0127 21:56:49.239714 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4233efc6853bb2a50aca841390ebca21f5402ed60b55c9bce27eafcb43108bb3"} err="failed to get container status \"4233efc6853bb2a50aca841390ebca21f5402ed60b55c9bce27eafcb43108bb3\": rpc error: code = NotFound desc = could not find container \"4233efc6853bb2a50aca841390ebca21f5402ed60b55c9bce27eafcb43108bb3\": container with ID starting with 4233efc6853bb2a50aca841390ebca21f5402ed60b55c9bce27eafcb43108bb3 not found: ID does not exist" Jan 27 21:56:49 crc kubenswrapper[4858]: I0127 21:56:49.239734 4858 scope.go:117] "RemoveContainer" containerID="1aec661c9b4f9ae888b5935ce54d0aaab14aa475aa286e7a0fccc012133d6287" Jan 27 21:56:49 crc kubenswrapper[4858]: E0127 21:56:49.240052 4858 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"1aec661c9b4f9ae888b5935ce54d0aaab14aa475aa286e7a0fccc012133d6287\": container with ID starting with 1aec661c9b4f9ae888b5935ce54d0aaab14aa475aa286e7a0fccc012133d6287 not found: ID does not exist" containerID="1aec661c9b4f9ae888b5935ce54d0aaab14aa475aa286e7a0fccc012133d6287" Jan 27 21:56:49 crc kubenswrapper[4858]: I0127 21:56:49.240093 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aec661c9b4f9ae888b5935ce54d0aaab14aa475aa286e7a0fccc012133d6287"} err="failed to get container status \"1aec661c9b4f9ae888b5935ce54d0aaab14aa475aa286e7a0fccc012133d6287\": rpc error: code = NotFound desc = could not find container \"1aec661c9b4f9ae888b5935ce54d0aaab14aa475aa286e7a0fccc012133d6287\": container with ID starting with 1aec661c9b4f9ae888b5935ce54d0aaab14aa475aa286e7a0fccc012133d6287 not found: ID does not exist" Jan 27 21:56:50 crc kubenswrapper[4858]: I0127 21:56:50.085881 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3471a68-0604-4e2d-8a76-b560eee0aae3" path="/var/lib/kubelet/pods/e3471a68-0604-4e2d-8a76-b560eee0aae3/volumes" Jan 27 21:56:51 crc kubenswrapper[4858]: I0127 21:56:51.685973 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw_86bd2beb-9d03-402c-bb7a-0ee191fa9f8d/util/0.log" Jan 27 21:56:51 crc kubenswrapper[4858]: I0127 21:56:51.951093 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw_86bd2beb-9d03-402c-bb7a-0ee191fa9f8d/util/0.log" Jan 27 21:56:51 crc kubenswrapper[4858]: I0127 21:56:51.965305 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw_86bd2beb-9d03-402c-bb7a-0ee191fa9f8d/pull/0.log" Jan 27 21:56:51 crc kubenswrapper[4858]: I0127 21:56:51.973670 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw_86bd2beb-9d03-402c-bb7a-0ee191fa9f8d/pull/0.log" Jan 27 21:56:52 crc kubenswrapper[4858]: I0127 21:56:52.185011 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw_86bd2beb-9d03-402c-bb7a-0ee191fa9f8d/extract/0.log" Jan 27 21:56:52 crc kubenswrapper[4858]: I0127 21:56:52.185142 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw_86bd2beb-9d03-402c-bb7a-0ee191fa9f8d/pull/0.log" Jan 27 21:56:52 crc kubenswrapper[4858]: I0127 21:56:52.200804 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2lzhw_86bd2beb-9d03-402c-bb7a-0ee191fa9f8d/util/0.log" Jan 27 21:56:52 crc kubenswrapper[4858]: I0127 21:56:52.326188 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2_40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5/util/0.log" Jan 27 21:56:52 crc kubenswrapper[4858]: I0127 21:56:52.527762 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2_40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5/pull/0.log" Jan 27 
Jan 27 21:56:52 crc kubenswrapper[4858]: I0127 21:56:52.549784 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2_40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5/pull/0.log"
Jan 27 21:56:52 crc kubenswrapper[4858]: I0127 21:56:52.551607 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2_40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5/util/0.log"
Jan 27 21:56:52 crc kubenswrapper[4858]: I0127 21:56:52.694853 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2_40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5/util/0.log"
Jan 27 21:56:52 crc kubenswrapper[4858]: I0127 21:56:52.700156 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2_40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5/pull/0.log"
Jan 27 21:56:52 crc kubenswrapper[4858]: I0127 21:56:52.717654 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fkbv2_40b2dccc-b57f-4ea5-a83b-e5ff8e1f2da5/extract/0.log"
Jan 27 21:56:52 crc kubenswrapper[4858]: I0127 21:56:52.891974 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt_2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627/util/0.log"
Jan 27 21:56:53 crc kubenswrapper[4858]: I0127 21:56:53.074267 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt_2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627/pull/0.log"
Jan 27 21:56:53 crc kubenswrapper[4858]: I0127 21:56:53.080977 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt_2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627/pull/0.log"
Jan 27 21:56:53 crc kubenswrapper[4858]: I0127 21:56:53.112805 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt_2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627/util/0.log"
Jan 27 21:56:53 crc kubenswrapper[4858]: I0127 21:56:53.249035 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt_2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627/util/0.log"
Jan 27 21:56:53 crc kubenswrapper[4858]: I0127 21:56:53.283359 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt_2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627/extract/0.log"
Jan 27 21:56:53 crc kubenswrapper[4858]: I0127 21:56:53.331760 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j28dt_2e3ed7e2-5e6e-416a-abe2-9d97a0b9f627/pull/0.log"
Jan 27 21:56:53 crc kubenswrapper[4858]: I0127 21:56:53.433780 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6x7b7_72c982d1-53e2-49e0-88ee-e6807485e9dc/extract-utilities/0.log"
Jan 27 21:56:53 crc kubenswrapper[4858]: I0127 21:56:53.581150 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6x7b7_72c982d1-53e2-49e0-88ee-e6807485e9dc/extract-content/0.log"
Jan 27 21:56:53 crc kubenswrapper[4858]: I0127 21:56:53.604162 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6x7b7_72c982d1-53e2-49e0-88ee-e6807485e9dc/extract-content/0.log"
Jan 27 21:56:53 crc kubenswrapper[4858]: I0127 21:56:53.609035 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6x7b7_72c982d1-53e2-49e0-88ee-e6807485e9dc/extract-utilities/0.log"
Jan 27 21:56:53 crc kubenswrapper[4858]: I0127 21:56:53.812963 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6x7b7_72c982d1-53e2-49e0-88ee-e6807485e9dc/extract-utilities/0.log"
Jan 27 21:56:53 crc kubenswrapper[4858]: I0127 21:56:53.847859 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6x7b7_72c982d1-53e2-49e0-88ee-e6807485e9dc/extract-content/0.log"
Jan 27 21:56:53 crc kubenswrapper[4858]: I0127 21:56:53.996486 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgbcp_43f8439c-4c71-4ed1-b4db-462915af0785/extract-utilities/0.log"
Jan 27 21:56:54 crc kubenswrapper[4858]: I0127 21:56:54.261455 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgbcp_43f8439c-4c71-4ed1-b4db-462915af0785/extract-content/0.log"
Jan 27 21:56:54 crc kubenswrapper[4858]: I0127 21:56:54.357313 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgbcp_43f8439c-4c71-4ed1-b4db-462915af0785/extract-utilities/0.log"
Jan 27 21:56:54 crc kubenswrapper[4858]: I0127 21:56:54.384318 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgbcp_43f8439c-4c71-4ed1-b4db-462915af0785/extract-content/0.log"
Jan 27 21:56:54 crc kubenswrapper[4858]: I0127 21:56:54.473839 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6x7b7_72c982d1-53e2-49e0-88ee-e6807485e9dc/registry-server/0.log"
Jan 27 21:56:54 crc kubenswrapper[4858]: I0127 21:56:54.535645 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgbcp_43f8439c-4c71-4ed1-b4db-462915af0785/extract-utilities/0.log"
Jan 27 21:56:54 crc kubenswrapper[4858]: I0127 21:56:54.540093 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgbcp_43f8439c-4c71-4ed1-b4db-462915af0785/extract-content/0.log"
Jan 27 21:56:54 crc kubenswrapper[4858]: I0127 21:56:54.784398 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-qlw92_35d290ca-2486-41c6-9a0e-0b905e2994bb/marketplace-operator/0.log"
Jan 27 21:56:54 crc kubenswrapper[4858]: I0127 21:56:54.986800 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6md42_e59c3191-f721-45cc-b1d2-2e6bd8bc0797/extract-utilities/0.log"
Jan 27 21:56:55 crc kubenswrapper[4858]: I0127 21:56:55.231405 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6md42_e59c3191-f721-45cc-b1d2-2e6bd8bc0797/extract-content/0.log"
Jan 27 21:56:55 crc kubenswrapper[4858]: I0127 21:56:55.243585 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6md42_e59c3191-f721-45cc-b1d2-2e6bd8bc0797/extract-utilities/0.log"
Jan 27 21:56:55 crc kubenswrapper[4858]: I0127 21:56:55.342010 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6md42_e59c3191-f721-45cc-b1d2-2e6bd8bc0797/extract-content/0.log"
Jan 27 21:56:55 crc kubenswrapper[4858]: I0127 21:56:55.504578 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6md42_e59c3191-f721-45cc-b1d2-2e6bd8bc0797/extract-content/0.log"
Jan 27 21:56:55 crc kubenswrapper[4858]: I0127 21:56:55.504694 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6md42_e59c3191-f721-45cc-b1d2-2e6bd8bc0797/extract-utilities/0.log"
Jan 27 21:56:55 crc kubenswrapper[4858]: I0127 21:56:55.610104 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xgbcp_43f8439c-4c71-4ed1-b4db-462915af0785/registry-server/0.log"
Jan 27 21:56:55 crc kubenswrapper[4858]: I0127 21:56:55.720469 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jfrzg_b7c34e1e-7f03-4b35-8a78-38432c88a885/extract-utilities/0.log"
Jan 27 21:56:55 crc kubenswrapper[4858]: I0127 21:56:55.872578 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6md42_e59c3191-f721-45cc-b1d2-2e6bd8bc0797/registry-server/0.log"
Jan 27 21:56:55 crc kubenswrapper[4858]: I0127 21:56:55.942133 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jfrzg_b7c34e1e-7f03-4b35-8a78-38432c88a885/extract-utilities/0.log"
Jan 27 21:56:55 crc kubenswrapper[4858]: I0127 21:56:55.944898 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jfrzg_b7c34e1e-7f03-4b35-8a78-38432c88a885/extract-content/0.log"
Jan 27 21:56:55 crc kubenswrapper[4858]: I0127 21:56:55.976392 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jfrzg_b7c34e1e-7f03-4b35-8a78-38432c88a885/extract-content/0.log"
Jan 27 21:56:56 crc kubenswrapper[4858]: I0127 21:56:56.129651 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jfrzg_b7c34e1e-7f03-4b35-8a78-38432c88a885/extract-utilities/0.log"
Jan 27 21:56:56 crc kubenswrapper[4858]: I0127 21:56:56.177229 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jfrzg_b7c34e1e-7f03-4b35-8a78-38432c88a885/extract-content/0.log"
Jan 27 21:56:57 crc kubenswrapper[4858]: I0127 21:56:57.071504 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5"
Jan 27 21:56:57 crc kubenswrapper[4858]: E0127 21:56:57.072205 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
Jan 27 21:56:57 crc kubenswrapper[4858]: I0127 21:56:57.106499 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jfrzg_b7c34e1e-7f03-4b35-8a78-38432c88a885/registry-server/0.log"
Jan 27 21:57:10 crc kubenswrapper[4858]: I0127 21:57:10.333587 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57c849b6b8-mx8qh_c4c617c2-8b14-4e9c-8a40-ab1353beeb33/prometheus-operator-admission-webhook/0.log"
Jan 27 21:57:10 crc kubenswrapper[4858]: I0127 21:57:10.343870 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-bdznk_35e8e577-768b-425e-ae5e-74f9f4710566/prometheus-operator/0.log"
Jan 27 21:57:10 crc kubenswrapper[4858]: I0127 21:57:10.385995 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57c849b6b8-vk825_812a6b90-9a07-4f7f-864d-baa13b5ab210/prometheus-operator-admission-webhook/0.log"
Jan 27 21:57:10 crc kubenswrapper[4858]: I0127 21:57:10.541910 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-nfc2q_3c0cbb64-d018-496a-a983-8c4761f142ed/perses-operator/0.log"
Jan 27 21:57:10 crc kubenswrapper[4858]: I0127 21:57:10.556529 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-dj5bj_40809707-fd14-4599-a0ac-0bcb0c90661d/operator/0.log"
Jan 27 21:57:12 crc kubenswrapper[4858]: I0127 21:57:12.071642 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5"
Jan 27 21:57:12 crc kubenswrapper[4858]: E0127 21:57:12.072058 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
Jan 27 21:57:24 crc kubenswrapper[4858]: I0127 21:57:24.071674 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5"
Jan 27 21:57:24 crc kubenswrapper[4858]: E0127 21:57:24.072373 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
Jan 27 21:57:38 crc kubenswrapper[4858]: I0127 21:57:38.070770 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5"
Jan 27 21:57:38 crc kubenswrapper[4858]: E0127 21:57:38.071514 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40"
containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 21:57:53 crc kubenswrapper[4858]: E0127 21:57:53.072005 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:58:08 crc kubenswrapper[4858]: I0127 21:58:08.080273 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 21:58:08 crc kubenswrapper[4858]: E0127 21:58:08.082531 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:58:19 crc kubenswrapper[4858]: I0127 21:58:19.071170 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 21:58:19 crc kubenswrapper[4858]: E0127 21:58:19.072306 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:58:33 crc kubenswrapper[4858]: I0127 21:58:33.071806 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 21:58:33 crc kubenswrapper[4858]: E0127 21:58:33.072948 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:58:46 crc kubenswrapper[4858]: I0127 21:58:46.079827 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 21:58:46 crc kubenswrapper[4858]: E0127 21:58:46.080624 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-psxnq_openshift-machine-config-operator(50837e4c-bd24-4b62-b1e7-b586e702bd40)\"" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" Jan 27 21:59:01 crc kubenswrapper[4858]: I0127 21:59:01.073487 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 21:59:01 crc kubenswrapper[4858]: I0127 21:59:01.529371 4858 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"ee4cf6e67b2c44879724fafd0d4096cc7ffbcaa8daa108db195da1dc16df08b5"} Jan 27 21:59:17 crc kubenswrapper[4858]: I0127 21:59:17.745054 4858 generic.go:334] "Generic (PLEG): container finished" podID="db47a894-c924-4cfe-b655-7da395bff4b4" containerID="0b01c3f9a7c6131e437e2ded93569bda22a78fb9420ae52b0f97ea1c26985254" exitCode=0 Jan 27 21:59:17 crc kubenswrapper[4858]: I0127 21:59:17.745225 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5r5ms/must-gather-m54br" event={"ID":"db47a894-c924-4cfe-b655-7da395bff4b4","Type":"ContainerDied","Data":"0b01c3f9a7c6131e437e2ded93569bda22a78fb9420ae52b0f97ea1c26985254"} Jan 27 21:59:17 crc kubenswrapper[4858]: I0127 21:59:17.746422 4858 scope.go:117] "RemoveContainer" containerID="0b01c3f9a7c6131e437e2ded93569bda22a78fb9420ae52b0f97ea1c26985254" Jan 27 21:59:18 crc kubenswrapper[4858]: I0127 21:59:18.236882 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5r5ms_must-gather-m54br_db47a894-c924-4cfe-b655-7da395bff4b4/gather/0.log" Jan 27 21:59:30 crc kubenswrapper[4858]: I0127 21:59:30.445970 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5r5ms/must-gather-m54br"] Jan 27 21:59:30 crc kubenswrapper[4858]: I0127 21:59:30.447056 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5r5ms/must-gather-m54br" podUID="db47a894-c924-4cfe-b655-7da395bff4b4" containerName="copy" containerID="cri-o://0e3f089703b58543601a876c2c30be13fe8bcb1f2d45a8a355d285ed4b8e6f24" gracePeriod=2 Jan 27 21:59:30 crc kubenswrapper[4858]: I0127 21:59:30.463492 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5r5ms/must-gather-m54br"] Jan 27 21:59:30 crc kubenswrapper[4858]: I0127 21:59:30.883911 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5r5ms_must-gather-m54br_db47a894-c924-4cfe-b655-7da395bff4b4/copy/0.log" Jan 27 21:59:30 crc kubenswrapper[4858]: I0127 21:59:30.884513 4858 generic.go:334] "Generic (PLEG): container finished" podID="db47a894-c924-4cfe-b655-7da395bff4b4" containerID="0e3f089703b58543601a876c2c30be13fe8bcb1f2d45a8a355d285ed4b8e6f24" exitCode=143 Jan 27 21:59:30 crc kubenswrapper[4858]: I0127 21:59:30.884589 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84cce7449a8f45e8321541313fedd6e55d3ff07cd674f0963c3ac3d81ae90d08" Jan 27 21:59:30 crc kubenswrapper[4858]: I0127 21:59:30.898623 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5r5ms_must-gather-m54br_db47a894-c924-4cfe-b655-7da395bff4b4/copy/0.log" Jan 27 21:59:30 crc kubenswrapper[4858]: I0127 21:59:30.899090 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5r5ms/must-gather-m54br" Jan 27 21:59:30 crc kubenswrapper[4858]: I0127 21:59:30.983074 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlhwx\" (UniqueName: \"kubernetes.io/projected/db47a894-c924-4cfe-b655-7da395bff4b4-kube-api-access-xlhwx\") pod \"db47a894-c924-4cfe-b655-7da395bff4b4\" (UID: \"db47a894-c924-4cfe-b655-7da395bff4b4\") " Jan 27 21:59:30 crc kubenswrapper[4858]: I0127 21:59:30.983256 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/db47a894-c924-4cfe-b655-7da395bff4b4-must-gather-output\") pod \"db47a894-c924-4cfe-b655-7da395bff4b4\" (UID: \"db47a894-c924-4cfe-b655-7da395bff4b4\") " Jan 27 21:59:30 crc kubenswrapper[4858]: I0127 21:59:30.994119 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db47a894-c924-4cfe-b655-7da395bff4b4-kube-api-access-xlhwx" (OuterVolumeSpecName: "kube-api-access-xlhwx") pod "db47a894-c924-4cfe-b655-7da395bff4b4" (UID: "db47a894-c924-4cfe-b655-7da395bff4b4"). InnerVolumeSpecName "kube-api-access-xlhwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 21:59:31 crc kubenswrapper[4858]: I0127 21:59:31.087256 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlhwx\" (UniqueName: \"kubernetes.io/projected/db47a894-c924-4cfe-b655-7da395bff4b4-kube-api-access-xlhwx\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:31 crc kubenswrapper[4858]: I0127 21:59:31.215693 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db47a894-c924-4cfe-b655-7da395bff4b4-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "db47a894-c924-4cfe-b655-7da395bff4b4" (UID: "db47a894-c924-4cfe-b655-7da395bff4b4"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 21:59:31 crc kubenswrapper[4858]: I0127 21:59:31.291941 4858 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/db47a894-c924-4cfe-b655-7da395bff4b4-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 21:59:31 crc kubenswrapper[4858]: I0127 21:59:31.892948 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5r5ms/must-gather-m54br" Jan 27 21:59:32 crc kubenswrapper[4858]: I0127 21:59:32.082257 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db47a894-c924-4cfe-b655-7da395bff4b4" path="/var/lib/kubelet/pods/db47a894-c924-4cfe-b655-7da395bff4b4/volumes" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.145442 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5"] Jan 27 22:00:00 crc kubenswrapper[4858]: E0127 22:00:00.146448 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f256dd82-71d1-4488-a8b1-d64d669dbe7c" containerName="extract-utilities" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.146467 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f256dd82-71d1-4488-a8b1-d64d669dbe7c" containerName="extract-utilities" Jan 27 22:00:00 crc kubenswrapper[4858]: E0127 22:00:00.146487 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db47a894-c924-4cfe-b655-7da395bff4b4" containerName="copy" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.146495 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="db47a894-c924-4cfe-b655-7da395bff4b4" containerName="copy" Jan 27 22:00:00 crc kubenswrapper[4858]: E0127 22:00:00.146513 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f256dd82-71d1-4488-a8b1-d64d669dbe7c" containerName="extract-content" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.146521 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f256dd82-71d1-4488-a8b1-d64d669dbe7c" containerName="extract-content" Jan 27 22:00:00 crc kubenswrapper[4858]: E0127 22:00:00.146535 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3471a68-0604-4e2d-8a76-b560eee0aae3" containerName="registry-server" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.146560 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3471a68-0604-4e2d-8a76-b560eee0aae3" containerName="registry-server" Jan 27 22:00:00 crc kubenswrapper[4858]: E0127 22:00:00.146582 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3471a68-0604-4e2d-8a76-b560eee0aae3" containerName="extract-utilities" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.146590 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3471a68-0604-4e2d-8a76-b560eee0aae3" containerName="extract-utilities" Jan 27 22:00:00 crc kubenswrapper[4858]: E0127 22:00:00.146604 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f256dd82-71d1-4488-a8b1-d64d669dbe7c" containerName="registry-server" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.146611 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f256dd82-71d1-4488-a8b1-d64d669dbe7c" containerName="registry-server" Jan 27 22:00:00 crc kubenswrapper[4858]: E0127 22:00:00.146624 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3471a68-0604-4e2d-8a76-b560eee0aae3" containerName="extract-content" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.146631 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3471a68-0604-4e2d-8a76-b560eee0aae3" containerName="extract-content" Jan 27 22:00:00 crc kubenswrapper[4858]: E0127 22:00:00.146656 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db47a894-c924-4cfe-b655-7da395bff4b4" containerName="gather" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.146663 4858 
state_mem.go:107] "Deleted CPUSet assignment" podUID="db47a894-c924-4cfe-b655-7da395bff4b4" containerName="gather" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.146895 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3471a68-0604-4e2d-8a76-b560eee0aae3" containerName="registry-server" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.146917 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="db47a894-c924-4cfe-b655-7da395bff4b4" containerName="copy" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.146931 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="db47a894-c924-4cfe-b655-7da395bff4b4" containerName="gather" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.146946 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f256dd82-71d1-4488-a8b1-d64d669dbe7c" containerName="registry-server" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.147816 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.151048 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.152029 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.176750 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5"] Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.219450 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-secret-volume\") pod \"collect-profiles-29492520-qjkl5\" (UID: \"7d9387ba-0e82-4ebc-a8aa-04ab91316cde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.219582 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-config-volume\") pod \"collect-profiles-29492520-qjkl5\" (UID: \"7d9387ba-0e82-4ebc-a8aa-04ab91316cde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.219663 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7l4l\" (UniqueName: \"kubernetes.io/projected/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-kube-api-access-k7l4l\") pod \"collect-profiles-29492520-qjkl5\" (UID: \"7d9387ba-0e82-4ebc-a8aa-04ab91316cde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.321409 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-secret-volume\") pod \"collect-profiles-29492520-qjkl5\" (UID: \"7d9387ba-0e82-4ebc-a8aa-04ab91316cde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.321499 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-config-volume\") pod \"collect-profiles-29492520-qjkl5\" (UID: \"7d9387ba-0e82-4ebc-a8aa-04ab91316cde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.321589 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7l4l\" (UniqueName: \"kubernetes.io/projected/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-kube-api-access-k7l4l\") pod \"collect-profiles-29492520-qjkl5\" (UID: \"7d9387ba-0e82-4ebc-a8aa-04ab91316cde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.322749 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-config-volume\") pod \"collect-profiles-29492520-qjkl5\" (UID: \"7d9387ba-0e82-4ebc-a8aa-04ab91316cde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.327523 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-secret-volume\") pod \"collect-profiles-29492520-qjkl5\" (UID: \"7d9387ba-0e82-4ebc-a8aa-04ab91316cde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.342383 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7l4l\" (UniqueName: \"kubernetes.io/projected/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-kube-api-access-k7l4l\") pod \"collect-profiles-29492520-qjkl5\" (UID: \"7d9387ba-0e82-4ebc-a8aa-04ab91316cde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.468849 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" Jan 27 22:00:00 crc kubenswrapper[4858]: I0127 22:00:00.951677 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5"] Jan 27 22:00:01 crc kubenswrapper[4858]: I0127 22:00:01.204690 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" event={"ID":"7d9387ba-0e82-4ebc-a8aa-04ab91316cde","Type":"ContainerStarted","Data":"b4dfcb63a08f1d93abe19bb3fa4fc91f47777970790340a5b64ea0e9feba6554"} Jan 27 22:00:01 crc kubenswrapper[4858]: I0127 22:00:01.204744 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" event={"ID":"7d9387ba-0e82-4ebc-a8aa-04ab91316cde","Type":"ContainerStarted","Data":"17455f79428d6ab7e13b747463850ff86cb4f473da937f8ee8d3fc3352332214"} Jan 27 22:00:01 crc kubenswrapper[4858]: I0127 22:00:01.218014 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" podStartSLOduration=1.217993232 podStartE2EDuration="1.217993232s" podCreationTimestamp="2026-01-27 22:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:00:01.217319293 +0000 UTC m=+6745.925135009" watchObservedRunningTime="2026-01-27 22:00:01.217993232 +0000 UTC m=+6745.925808968" Jan 27 22:00:02 crc kubenswrapper[4858]: I0127 22:00:02.217688 4858 generic.go:334] "Generic (PLEG): container finished" podID="7d9387ba-0e82-4ebc-a8aa-04ab91316cde" containerID="b4dfcb63a08f1d93abe19bb3fa4fc91f47777970790340a5b64ea0e9feba6554" exitCode=0 Jan 27 22:00:02 crc kubenswrapper[4858]: I0127 22:00:02.217741 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" event={"ID":"7d9387ba-0e82-4ebc-a8aa-04ab91316cde","Type":"ContainerDied","Data":"b4dfcb63a08f1d93abe19bb3fa4fc91f47777970790340a5b64ea0e9feba6554"} Jan 27 22:00:02 crc kubenswrapper[4858]: I0127 22:00:02.766642 4858 scope.go:117] "RemoveContainer" containerID="0e3f089703b58543601a876c2c30be13fe8bcb1f2d45a8a355d285ed4b8e6f24" Jan 27 22:00:02 crc kubenswrapper[4858]: I0127 22:00:02.786891 4858 scope.go:117] "RemoveContainer" containerID="0b01c3f9a7c6131e437e2ded93569bda22a78fb9420ae52b0f97ea1c26985254" Jan 27 22:00:03 crc kubenswrapper[4858]: I0127 22:00:03.827844 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" Jan 27 22:00:03 crc kubenswrapper[4858]: I0127 22:00:03.917737 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-secret-volume\") pod \"7d9387ba-0e82-4ebc-a8aa-04ab91316cde\" (UID: \"7d9387ba-0e82-4ebc-a8aa-04ab91316cde\") " Jan 27 22:00:03 crc kubenswrapper[4858]: I0127 22:00:03.917776 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-config-volume\") pod \"7d9387ba-0e82-4ebc-a8aa-04ab91316cde\" (UID: \"7d9387ba-0e82-4ebc-a8aa-04ab91316cde\") " Jan 27 22:00:03 crc kubenswrapper[4858]: I0127 22:00:03.917831 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7l4l\" (UniqueName: \"kubernetes.io/projected/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-kube-api-access-k7l4l\") pod \"7d9387ba-0e82-4ebc-a8aa-04ab91316cde\" (UID: \"7d9387ba-0e82-4ebc-a8aa-04ab91316cde\") " Jan 27 22:00:03 crc kubenswrapper[4858]: I0127 22:00:03.919029 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-config-volume" (OuterVolumeSpecName: "config-volume") pod "7d9387ba-0e82-4ebc-a8aa-04ab91316cde" (UID: "7d9387ba-0e82-4ebc-a8aa-04ab91316cde"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 22:00:03 crc kubenswrapper[4858]: I0127 22:00:03.923766 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7d9387ba-0e82-4ebc-a8aa-04ab91316cde" (UID: "7d9387ba-0e82-4ebc-a8aa-04ab91316cde"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:00:03 crc kubenswrapper[4858]: I0127 22:00:03.925634 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-kube-api-access-k7l4l" (OuterVolumeSpecName: "kube-api-access-k7l4l") pod "7d9387ba-0e82-4ebc-a8aa-04ab91316cde" (UID: "7d9387ba-0e82-4ebc-a8aa-04ab91316cde"). InnerVolumeSpecName "kube-api-access-k7l4l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:00:04 crc kubenswrapper[4858]: I0127 22:00:04.020996 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 22:00:04 crc kubenswrapper[4858]: I0127 22:00:04.021044 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 22:00:04 crc kubenswrapper[4858]: I0127 22:00:04.021054 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7l4l\" (UniqueName: \"kubernetes.io/projected/7d9387ba-0e82-4ebc-a8aa-04ab91316cde-kube-api-access-k7l4l\") on node \"crc\" DevicePath \"\"" Jan 27 22:00:04 crc kubenswrapper[4858]: I0127 22:00:04.236969 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" event={"ID":"7d9387ba-0e82-4ebc-a8aa-04ab91316cde","Type":"ContainerDied","Data":"17455f79428d6ab7e13b747463850ff86cb4f473da937f8ee8d3fc3352332214"} Jan 27 22:00:04 crc kubenswrapper[4858]: I0127 22:00:04.237038 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17455f79428d6ab7e13b747463850ff86cb4f473da937f8ee8d3fc3352332214" Jan 27 22:00:04 crc kubenswrapper[4858]: I0127 22:00:04.237066 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492520-qjkl5" Jan 27 22:00:04 crc kubenswrapper[4858]: I0127 22:00:04.302886 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5"] Jan 27 22:00:04 crc kubenswrapper[4858]: I0127 22:00:04.313585 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492475-29fn5"] Jan 27 22:00:06 crc kubenswrapper[4858]: I0127 22:00:06.084716 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05ef9663-c603-418e-976a-4193f1b0f88f" path="/var/lib/kubelet/pods/05ef9663-c603-418e-976a-4193f1b0f88f/volumes" Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.202409 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29492521-zq2fk"] Jan 27 22:01:00 crc kubenswrapper[4858]: E0127 22:01:00.204635 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9387ba-0e82-4ebc-a8aa-04ab91316cde" containerName="collect-profiles" Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.204660 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9387ba-0e82-4ebc-a8aa-04ab91316cde" containerName="collect-profiles" Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.205351 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d9387ba-0e82-4ebc-a8aa-04ab91316cde" containerName="collect-profiles" Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.207263 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492521-zq2fk" Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.216178 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-combined-ca-bundle\") pod \"keystone-cron-29492521-zq2fk\" (UID: \"9518fcd6-db8b-419a-900c-f70ce904fd25\") " pod="openstack/keystone-cron-29492521-zq2fk" Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.216294 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-config-data\") pod \"keystone-cron-29492521-zq2fk\" (UID: \"9518fcd6-db8b-419a-900c-f70ce904fd25\") " pod="openstack/keystone-cron-29492521-zq2fk" Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.216332 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-fernet-keys\") pod \"keystone-cron-29492521-zq2fk\" (UID: \"9518fcd6-db8b-419a-900c-f70ce904fd25\") " pod="openstack/keystone-cron-29492521-zq2fk" Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.216462 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgxjh\" (UniqueName: \"kubernetes.io/projected/9518fcd6-db8b-419a-900c-f70ce904fd25-kube-api-access-zgxjh\") pod \"keystone-cron-29492521-zq2fk\" (UID: \"9518fcd6-db8b-419a-900c-f70ce904fd25\") " pod="openstack/keystone-cron-29492521-zq2fk" Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.229038 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492521-zq2fk"] Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.318778 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-combined-ca-bundle\") pod \"keystone-cron-29492521-zq2fk\" (UID: \"9518fcd6-db8b-419a-900c-f70ce904fd25\") " pod="openstack/keystone-cron-29492521-zq2fk" Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.319195 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-config-data\") pod \"keystone-cron-29492521-zq2fk\" (UID: \"9518fcd6-db8b-419a-900c-f70ce904fd25\") " pod="openstack/keystone-cron-29492521-zq2fk" Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.319380 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-fernet-keys\") pod \"keystone-cron-29492521-zq2fk\" (UID: \"9518fcd6-db8b-419a-900c-f70ce904fd25\") " pod="openstack/keystone-cron-29492521-zq2fk" Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.319634 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgxjh\" (UniqueName: \"kubernetes.io/projected/9518fcd6-db8b-419a-900c-f70ce904fd25-kube-api-access-zgxjh\") pod \"keystone-cron-29492521-zq2fk\" (UID: \"9518fcd6-db8b-419a-900c-f70ce904fd25\") " pod="openstack/keystone-cron-29492521-zq2fk" Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.326173 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-fernet-keys\") pod \"keystone-cron-29492521-zq2fk\" (UID: \"9518fcd6-db8b-419a-900c-f70ce904fd25\") " pod="openstack/keystone-cron-29492521-zq2fk" Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.327067 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-combined-ca-bundle\") pod \"keystone-cron-29492521-zq2fk\" (UID: \"9518fcd6-db8b-419a-900c-f70ce904fd25\") " pod="openstack/keystone-cron-29492521-zq2fk" Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.328266 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-config-data\") pod \"keystone-cron-29492521-zq2fk\" (UID: \"9518fcd6-db8b-419a-900c-f70ce904fd25\") " pod="openstack/keystone-cron-29492521-zq2fk" Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.356103 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgxjh\" (UniqueName: \"kubernetes.io/projected/9518fcd6-db8b-419a-900c-f70ce904fd25-kube-api-access-zgxjh\") pod \"keystone-cron-29492521-zq2fk\" (UID: \"9518fcd6-db8b-419a-900c-f70ce904fd25\") " pod="openstack/keystone-cron-29492521-zq2fk" Jan 27 22:01:00 crc kubenswrapper[4858]: I0127 22:01:00.532637 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492521-zq2fk" Jan 27 22:01:01 crc kubenswrapper[4858]: I0127 22:01:01.002261 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492521-zq2fk"] Jan 27 22:01:01 crc kubenswrapper[4858]: I0127 22:01:01.835526 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492521-zq2fk" event={"ID":"9518fcd6-db8b-419a-900c-f70ce904fd25","Type":"ContainerStarted","Data":"b1ec5d224c48270022ba7cfcb51607a3ad79ffc51a8c629e90b9a56da1aeeb4b"} Jan 27 22:01:01 crc kubenswrapper[4858]: I0127 22:01:01.836007 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492521-zq2fk" event={"ID":"9518fcd6-db8b-419a-900c-f70ce904fd25","Type":"ContainerStarted","Data":"c5ecde1de7bf1fd3a232add7a4d7463e0843cdbf4577be2f03840a4b1f70d6c3"} Jan 27 22:01:01 crc kubenswrapper[4858]: I0127 22:01:01.866077 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29492521-zq2fk" podStartSLOduration=1.866035243 podStartE2EDuration="1.866035243s" podCreationTimestamp="2026-01-27 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 22:01:01.853478075 +0000 UTC m=+6806.561293821" watchObservedRunningTime="2026-01-27 22:01:01.866035243 +0000 UTC m=+6806.573850959" Jan 27 22:01:02 crc kubenswrapper[4858]: I0127 22:01:02.911789 4858 scope.go:117] "RemoveContainer" containerID="6dfec1f5b606621fca16f3d4ee7e619e3e8c28ff884a86504b177e62e09aebf2" Jan 27 22:01:06 crc kubenswrapper[4858]: I0127 22:01:06.053823 4858 generic.go:334] "Generic (PLEG): container finished" podID="9518fcd6-db8b-419a-900c-f70ce904fd25" containerID="b1ec5d224c48270022ba7cfcb51607a3ad79ffc51a8c629e90b9a56da1aeeb4b" exitCode=0 Jan 27 22:01:06 crc kubenswrapper[4858]: I0127 22:01:06.053886 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492521-zq2fk" 
event={"ID":"9518fcd6-db8b-419a-900c-f70ce904fd25","Type":"ContainerDied","Data":"b1ec5d224c48270022ba7cfcb51607a3ad79ffc51a8c629e90b9a56da1aeeb4b"} Jan 27 22:01:07 crc kubenswrapper[4858]: I0127 22:01:07.491851 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492521-zq2fk" Jan 27 22:01:07 crc kubenswrapper[4858]: I0127 22:01:07.568105 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-fernet-keys\") pod \"9518fcd6-db8b-419a-900c-f70ce904fd25\" (UID: \"9518fcd6-db8b-419a-900c-f70ce904fd25\") " Jan 27 22:01:07 crc kubenswrapper[4858]: I0127 22:01:07.568270 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-config-data\") pod \"9518fcd6-db8b-419a-900c-f70ce904fd25\" (UID: \"9518fcd6-db8b-419a-900c-f70ce904fd25\") " Jan 27 22:01:07 crc kubenswrapper[4858]: I0127 22:01:07.568522 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgxjh\" (UniqueName: \"kubernetes.io/projected/9518fcd6-db8b-419a-900c-f70ce904fd25-kube-api-access-zgxjh\") pod \"9518fcd6-db8b-419a-900c-f70ce904fd25\" (UID: \"9518fcd6-db8b-419a-900c-f70ce904fd25\") " Jan 27 22:01:07 crc kubenswrapper[4858]: I0127 22:01:07.568620 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-combined-ca-bundle\") pod \"9518fcd6-db8b-419a-900c-f70ce904fd25\" (UID: \"9518fcd6-db8b-419a-900c-f70ce904fd25\") " Jan 27 22:01:07 crc kubenswrapper[4858]: I0127 22:01:07.575596 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9518fcd6-db8b-419a-900c-f70ce904fd25" (UID: "9518fcd6-db8b-419a-900c-f70ce904fd25"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:01:07 crc kubenswrapper[4858]: I0127 22:01:07.581682 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9518fcd6-db8b-419a-900c-f70ce904fd25-kube-api-access-zgxjh" (OuterVolumeSpecName: "kube-api-access-zgxjh") pod "9518fcd6-db8b-419a-900c-f70ce904fd25" (UID: "9518fcd6-db8b-419a-900c-f70ce904fd25"). InnerVolumeSpecName "kube-api-access-zgxjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:01:07 crc kubenswrapper[4858]: I0127 22:01:07.615234 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9518fcd6-db8b-419a-900c-f70ce904fd25" (UID: "9518fcd6-db8b-419a-900c-f70ce904fd25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:01:07 crc kubenswrapper[4858]: I0127 22:01:07.635317 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-config-data" (OuterVolumeSpecName: "config-data") pod "9518fcd6-db8b-419a-900c-f70ce904fd25" (UID: "9518fcd6-db8b-419a-900c-f70ce904fd25"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 22:01:07 crc kubenswrapper[4858]: I0127 22:01:07.671213 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 22:01:07 crc kubenswrapper[4858]: I0127 22:01:07.671249 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 22:01:07 crc kubenswrapper[4858]: I0127 22:01:07.671258 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9518fcd6-db8b-419a-900c-f70ce904fd25-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 22:01:07 crc kubenswrapper[4858]: I0127 22:01:07.671267 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgxjh\" (UniqueName: \"kubernetes.io/projected/9518fcd6-db8b-419a-900c-f70ce904fd25-kube-api-access-zgxjh\") on node \"crc\" DevicePath \"\"" Jan 27 22:01:08 crc kubenswrapper[4858]: I0127 22:01:08.077501 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492521-zq2fk" Jan 27 22:01:08 crc kubenswrapper[4858]: I0127 22:01:08.088012 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492521-zq2fk" event={"ID":"9518fcd6-db8b-419a-900c-f70ce904fd25","Type":"ContainerDied","Data":"c5ecde1de7bf1fd3a232add7a4d7463e0843cdbf4577be2f03840a4b1f70d6c3"} Jan 27 22:01:08 crc kubenswrapper[4858]: I0127 22:01:08.088085 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5ecde1de7bf1fd3a232add7a4d7463e0843cdbf4577be2f03840a4b1f70d6c3" Jan 27 22:01:21 crc kubenswrapper[4858]: I0127 22:01:21.253298 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fmd8t"] Jan 27 22:01:21 crc kubenswrapper[4858]: E0127 22:01:21.254496 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9518fcd6-db8b-419a-900c-f70ce904fd25" containerName="keystone-cron" Jan 27 22:01:21 crc kubenswrapper[4858]: I0127 22:01:21.254512 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9518fcd6-db8b-419a-900c-f70ce904fd25" containerName="keystone-cron" Jan 27 22:01:21 crc kubenswrapper[4858]: I0127 22:01:21.254805 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9518fcd6-db8b-419a-900c-f70ce904fd25" containerName="keystone-cron" Jan 27 22:01:21 crc kubenswrapper[4858]: I0127 22:01:21.257601 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fmd8t" Jan 27 22:01:21 crc kubenswrapper[4858]: I0127 22:01:21.271466 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fmd8t"] Jan 27 22:01:21 crc kubenswrapper[4858]: I0127 22:01:21.291702 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71199c36-5fcb-4c1d-b225-1b52713f813f-utilities\") pod \"certified-operators-fmd8t\" (UID: \"71199c36-5fcb-4c1d-b225-1b52713f813f\") " pod="openshift-marketplace/certified-operators-fmd8t" Jan 27 22:01:21 crc kubenswrapper[4858]: I0127 22:01:21.291781 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl4bp\" (UniqueName: \"kubernetes.io/projected/71199c36-5fcb-4c1d-b225-1b52713f813f-kube-api-access-jl4bp\") pod \"certified-operators-fmd8t\" (UID: \"71199c36-5fcb-4c1d-b225-1b52713f813f\") " pod="openshift-marketplace/certified-operators-fmd8t" Jan 27 22:01:21 crc kubenswrapper[4858]: I0127 22:01:21.291926 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71199c36-5fcb-4c1d-b225-1b52713f813f-catalog-content\") pod \"certified-operators-fmd8t\" (UID: \"71199c36-5fcb-4c1d-b225-1b52713f813f\") " pod="openshift-marketplace/certified-operators-fmd8t" Jan 27 22:01:21 crc kubenswrapper[4858]: I0127 22:01:21.393525 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71199c36-5fcb-4c1d-b225-1b52713f813f-utilities\") pod \"certified-operators-fmd8t\" (UID: \"71199c36-5fcb-4c1d-b225-1b52713f813f\") " pod="openshift-marketplace/certified-operators-fmd8t" Jan 27 22:01:21 crc kubenswrapper[4858]: I0127 22:01:21.393860 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl4bp\" (UniqueName: \"kubernetes.io/projected/71199c36-5fcb-4c1d-b225-1b52713f813f-kube-api-access-jl4bp\") pod \"certified-operators-fmd8t\" (UID: \"71199c36-5fcb-4c1d-b225-1b52713f813f\") " pod="openshift-marketplace/certified-operators-fmd8t" Jan 27 22:01:21 crc kubenswrapper[4858]: I0127 22:01:21.394069 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71199c36-5fcb-4c1d-b225-1b52713f813f-utilities\") pod \"certified-operators-fmd8t\" (UID: \"71199c36-5fcb-4c1d-b225-1b52713f813f\") " pod="openshift-marketplace/certified-operators-fmd8t" Jan 27 22:01:21 crc kubenswrapper[4858]: I0127 22:01:21.394168 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71199c36-5fcb-4c1d-b225-1b52713f813f-catalog-content\") pod \"certified-operators-fmd8t\" (UID: \"71199c36-5fcb-4c1d-b225-1b52713f813f\") " pod="openshift-marketplace/certified-operators-fmd8t" Jan 27 22:01:21 crc kubenswrapper[4858]: I0127 22:01:21.394524 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71199c36-5fcb-4c1d-b225-1b52713f813f-catalog-content\") pod \"certified-operators-fmd8t\" (UID: \"71199c36-5fcb-4c1d-b225-1b52713f813f\") " pod="openshift-marketplace/certified-operators-fmd8t" Jan 27 22:01:21 crc kubenswrapper[4858]: I0127 22:01:21.417818 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jl4bp\" (UniqueName: \"kubernetes.io/projected/71199c36-5fcb-4c1d-b225-1b52713f813f-kube-api-access-jl4bp\") pod \"certified-operators-fmd8t\" (UID: \"71199c36-5fcb-4c1d-b225-1b52713f813f\") " pod="openshift-marketplace/certified-operators-fmd8t" Jan 27 22:01:21 crc kubenswrapper[4858]: I0127 22:01:21.582703 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fmd8t" Jan 27 22:01:22 crc kubenswrapper[4858]: I0127 22:01:22.100864 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fmd8t"] Jan 27 22:01:22 crc kubenswrapper[4858]: W0127 22:01:22.110431 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71199c36_5fcb_4c1d_b225_1b52713f813f.slice/crio-e70815d64cb5ba56b81c89ca31e2f735e1355e23c9eba99cc56ac5330487373c WatchSource:0}: Error finding container e70815d64cb5ba56b81c89ca31e2f735e1355e23c9eba99cc56ac5330487373c: Status 404 returned error can't find the container with id e70815d64cb5ba56b81c89ca31e2f735e1355e23c9eba99cc56ac5330487373c Jan 27 22:01:22 crc kubenswrapper[4858]: I0127 22:01:22.217318 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmd8t" event={"ID":"71199c36-5fcb-4c1d-b225-1b52713f813f","Type":"ContainerStarted","Data":"e70815d64cb5ba56b81c89ca31e2f735e1355e23c9eba99cc56ac5330487373c"} Jan 27 22:01:23 crc kubenswrapper[4858]: I0127 22:01:23.232981 4858 generic.go:334] "Generic (PLEG): container finished" podID="71199c36-5fcb-4c1d-b225-1b52713f813f" containerID="1ecb1150f7ecd988da8189e69d7842574e38b3fa98221fbe32ca819689cfd898" exitCode=0 Jan 27 22:01:23 crc kubenswrapper[4858]: I0127 22:01:23.233132 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmd8t" event={"ID":"71199c36-5fcb-4c1d-b225-1b52713f813f","Type":"ContainerDied","Data":"1ecb1150f7ecd988da8189e69d7842574e38b3fa98221fbe32ca819689cfd898"} Jan 27 22:01:23 crc kubenswrapper[4858]: I0127 22:01:23.238209 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 22:01:25 crc kubenswrapper[4858]: I0127 22:01:25.256290 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmd8t" event={"ID":"71199c36-5fcb-4c1d-b225-1b52713f813f","Type":"ContainerStarted","Data":"6fcf132e5a76da98dc3d1d50895a7c5fa906f1883663c90de905508fa4f98881"} Jan 27 22:01:26 crc kubenswrapper[4858]: I0127 22:01:26.269882 4858 generic.go:334] "Generic (PLEG): container finished" podID="71199c36-5fcb-4c1d-b225-1b52713f813f" containerID="6fcf132e5a76da98dc3d1d50895a7c5fa906f1883663c90de905508fa4f98881" exitCode=0 Jan 27 22:01:26 crc kubenswrapper[4858]: I0127 22:01:26.269954 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmd8t" event={"ID":"71199c36-5fcb-4c1d-b225-1b52713f813f","Type":"ContainerDied","Data":"6fcf132e5a76da98dc3d1d50895a7c5fa906f1883663c90de905508fa4f98881"} Jan 27 22:01:27 crc kubenswrapper[4858]: I0127 22:01:27.284908 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmd8t" event={"ID":"71199c36-5fcb-4c1d-b225-1b52713f813f","Type":"ContainerStarted","Data":"fb64b846a86c6d6b82c3c87462d676644530b47ac21263fe55707bcb1652c658"} Jan 27 22:01:27 crc kubenswrapper[4858]: I0127 
Jan 27 22:01:29 crc kubenswrapper[4858]: I0127 22:01:29.329017 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 22:01:29 crc kubenswrapper[4858]: I0127 22:01:29.329620 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 22:01:31 crc kubenswrapper[4858]: I0127 22:01:31.583314 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fmd8t"
Jan 27 22:01:31 crc kubenswrapper[4858]: I0127 22:01:31.583748 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fmd8t"
Jan 27 22:01:31 crc kubenswrapper[4858]: I0127 22:01:31.629605 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fmd8t"
Jan 27 22:01:32 crc kubenswrapper[4858]: I0127 22:01:32.390623 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fmd8t"
Jan 27 22:01:32 crc kubenswrapper[4858]: I0127 22:01:32.444179 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fmd8t"]
Jan 27 22:01:34 crc kubenswrapper[4858]: I0127 22:01:34.354903 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fmd8t" podUID="71199c36-5fcb-4c1d-b225-1b52713f813f" containerName="registry-server" containerID="cri-o://fb64b846a86c6d6b82c3c87462d676644530b47ac21263fe55707bcb1652c658" gracePeriod=2
Jan 27 22:01:34 crc kubenswrapper[4858]: I0127 22:01:34.818021 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fmd8t"
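Between 22:01:31 and 22:01:32 the registry-server's startup probe flips from unhealthy to started and its readiness probe from empty to ready, the transition that gates the pod's Ready condition; the pod is then deleted almost immediately, which is normal for short-lived catalog refresh pods. The transitions can be pulled out directly (sketch; stand-in file path):

# Follow "SyncLoop (probe)" state transitions per pod.
import re

pat = re.compile(r'"SyncLoop \(probe\)" probe="(\w+)" status="(\w*)" pod="([^"]+)"')
for line in open("kubelet.log", encoding="utf-8"):  # stand-in path
    m = pat.search(line)
    if m:
        probe, status, pod = m.groups()
        print(f'{pod}: {probe} -> {status or "(cleared)"}')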
Need to start a new one" pod="openshift-marketplace/certified-operators-fmd8t" Jan 27 22:01:34 crc kubenswrapper[4858]: I0127 22:01:34.990983 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71199c36-5fcb-4c1d-b225-1b52713f813f-utilities\") pod \"71199c36-5fcb-4c1d-b225-1b52713f813f\" (UID: \"71199c36-5fcb-4c1d-b225-1b52713f813f\") " Jan 27 22:01:34 crc kubenswrapper[4858]: I0127 22:01:34.991481 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71199c36-5fcb-4c1d-b225-1b52713f813f-catalog-content\") pod \"71199c36-5fcb-4c1d-b225-1b52713f813f\" (UID: \"71199c36-5fcb-4c1d-b225-1b52713f813f\") " Jan 27 22:01:34 crc kubenswrapper[4858]: I0127 22:01:34.991612 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl4bp\" (UniqueName: \"kubernetes.io/projected/71199c36-5fcb-4c1d-b225-1b52713f813f-kube-api-access-jl4bp\") pod \"71199c36-5fcb-4c1d-b225-1b52713f813f\" (UID: \"71199c36-5fcb-4c1d-b225-1b52713f813f\") " Jan 27 22:01:34 crc kubenswrapper[4858]: I0127 22:01:34.991938 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71199c36-5fcb-4c1d-b225-1b52713f813f-utilities" (OuterVolumeSpecName: "utilities") pod "71199c36-5fcb-4c1d-b225-1b52713f813f" (UID: "71199c36-5fcb-4c1d-b225-1b52713f813f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:01:34 crc kubenswrapper[4858]: I0127 22:01:34.992282 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71199c36-5fcb-4c1d-b225-1b52713f813f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.000037 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71199c36-5fcb-4c1d-b225-1b52713f813f-kube-api-access-jl4bp" (OuterVolumeSpecName: "kube-api-access-jl4bp") pod "71199c36-5fcb-4c1d-b225-1b52713f813f" (UID: "71199c36-5fcb-4c1d-b225-1b52713f813f"). InnerVolumeSpecName "kube-api-access-jl4bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.053663 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71199c36-5fcb-4c1d-b225-1b52713f813f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71199c36-5fcb-4c1d-b225-1b52713f813f" (UID: "71199c36-5fcb-4c1d-b225-1b52713f813f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.094775 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71199c36-5fcb-4c1d-b225-1b52713f813f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.094814 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl4bp\" (UniqueName: \"kubernetes.io/projected/71199c36-5fcb-4c1d-b225-1b52713f813f-kube-api-access-jl4bp\") on node \"crc\" DevicePath \"\"" Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.365372 4858 generic.go:334] "Generic (PLEG): container finished" podID="71199c36-5fcb-4c1d-b225-1b52713f813f" containerID="fb64b846a86c6d6b82c3c87462d676644530b47ac21263fe55707bcb1652c658" exitCode=0 Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.365417 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmd8t" event={"ID":"71199c36-5fcb-4c1d-b225-1b52713f813f","Type":"ContainerDied","Data":"fb64b846a86c6d6b82c3c87462d676644530b47ac21263fe55707bcb1652c658"} Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.365443 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fmd8t" event={"ID":"71199c36-5fcb-4c1d-b225-1b52713f813f","Type":"ContainerDied","Data":"e70815d64cb5ba56b81c89ca31e2f735e1355e23c9eba99cc56ac5330487373c"} Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.365462 4858 scope.go:117] "RemoveContainer" containerID="fb64b846a86c6d6b82c3c87462d676644530b47ac21263fe55707bcb1652c658" Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.365621 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fmd8t" Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.400638 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fmd8t"] Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.406604 4858 scope.go:117] "RemoveContainer" containerID="6fcf132e5a76da98dc3d1d50895a7c5fa906f1883663c90de905508fa4f98881" Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.412010 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fmd8t"] Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.428394 4858 scope.go:117] "RemoveContainer" containerID="1ecb1150f7ecd988da8189e69d7842574e38b3fa98221fbe32ca819689cfd898" Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.488425 4858 scope.go:117] "RemoveContainer" containerID="fb64b846a86c6d6b82c3c87462d676644530b47ac21263fe55707bcb1652c658" Jan 27 22:01:35 crc kubenswrapper[4858]: E0127 22:01:35.488843 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb64b846a86c6d6b82c3c87462d676644530b47ac21263fe55707bcb1652c658\": container with ID starting with fb64b846a86c6d6b82c3c87462d676644530b47ac21263fe55707bcb1652c658 not found: ID does not exist" containerID="fb64b846a86c6d6b82c3c87462d676644530b47ac21263fe55707bcb1652c658" Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.488894 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb64b846a86c6d6b82c3c87462d676644530b47ac21263fe55707bcb1652c658"} err="failed to get container status \"fb64b846a86c6d6b82c3c87462d676644530b47ac21263fe55707bcb1652c658\": rpc error: code = NotFound desc = could not find container \"fb64b846a86c6d6b82c3c87462d676644530b47ac21263fe55707bcb1652c658\": container with ID starting with fb64b846a86c6d6b82c3c87462d676644530b47ac21263fe55707bcb1652c658 not found: ID does not exist" Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.488913 4858 scope.go:117] "RemoveContainer" containerID="6fcf132e5a76da98dc3d1d50895a7c5fa906f1883663c90de905508fa4f98881" Jan 27 22:01:35 crc kubenswrapper[4858]: E0127 22:01:35.489215 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fcf132e5a76da98dc3d1d50895a7c5fa906f1883663c90de905508fa4f98881\": container with ID starting with 6fcf132e5a76da98dc3d1d50895a7c5fa906f1883663c90de905508fa4f98881 not found: ID does not exist" containerID="6fcf132e5a76da98dc3d1d50895a7c5fa906f1883663c90de905508fa4f98881" Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.489235 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fcf132e5a76da98dc3d1d50895a7c5fa906f1883663c90de905508fa4f98881"} err="failed to get container status \"6fcf132e5a76da98dc3d1d50895a7c5fa906f1883663c90de905508fa4f98881\": rpc error: code = NotFound desc = could not find container \"6fcf132e5a76da98dc3d1d50895a7c5fa906f1883663c90de905508fa4f98881\": container with ID starting with 6fcf132e5a76da98dc3d1d50895a7c5fa906f1883663c90de905508fa4f98881 not found: ID does not exist" Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.489266 4858 scope.go:117] "RemoveContainer" containerID="1ecb1150f7ecd988da8189e69d7842574e38b3fa98221fbe32ca819689cfd898" Jan 27 22:01:35 crc kubenswrapper[4858]: E0127 22:01:35.489489 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1ecb1150f7ecd988da8189e69d7842574e38b3fa98221fbe32ca819689cfd898\": container with ID starting with 1ecb1150f7ecd988da8189e69d7842574e38b3fa98221fbe32ca819689cfd898 not found: ID does not exist" containerID="1ecb1150f7ecd988da8189e69d7842574e38b3fa98221fbe32ca819689cfd898" Jan 27 22:01:35 crc kubenswrapper[4858]: I0127 22:01:35.489519 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ecb1150f7ecd988da8189e69d7842574e38b3fa98221fbe32ca819689cfd898"} err="failed to get container status \"1ecb1150f7ecd988da8189e69d7842574e38b3fa98221fbe32ca819689cfd898\": rpc error: code = NotFound desc = could not find container \"1ecb1150f7ecd988da8189e69d7842574e38b3fa98221fbe32ca819689cfd898\": container with ID starting with 1ecb1150f7ecd988da8189e69d7842574e38b3fa98221fbe32ca819689cfd898 not found: ID does not exist" Jan 27 22:01:36 crc kubenswrapper[4858]: I0127 22:01:36.083508 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71199c36-5fcb-4c1d-b225-1b52713f813f" path="/var/lib/kubelet/pods/71199c36-5fcb-4c1d-b225-1b52713f813f/volumes" Jan 27 22:01:59 crc kubenswrapper[4858]: I0127 22:01:59.329081 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:01:59 crc kubenswrapper[4858]: I0127 22:01:59.329811 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:02:29 crc kubenswrapper[4858]: I0127 22:02:29.329320 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:02:29 crc kubenswrapper[4858]: I0127 22:02:29.329944 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 22:02:29 crc kubenswrapper[4858]: I0127 22:02:29.329990 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" Jan 27 22:02:29 crc kubenswrapper[4858]: I0127 22:02:29.330778 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ee4cf6e67b2c44879724fafd0d4096cc7ffbcaa8daa108db195da1dc16df08b5"} pod="openshift-machine-config-operator/machine-config-daemon-psxnq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 22:02:29 crc kubenswrapper[4858]: I0127 22:02:29.330828 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" 
podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" containerID="cri-o://ee4cf6e67b2c44879724fafd0d4096cc7ffbcaa8daa108db195da1dc16df08b5" gracePeriod=600 Jan 27 22:02:29 crc kubenswrapper[4858]: I0127 22:02:29.980719 4858 generic.go:334] "Generic (PLEG): container finished" podID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerID="ee4cf6e67b2c44879724fafd0d4096cc7ffbcaa8daa108db195da1dc16df08b5" exitCode=0 Jan 27 22:02:29 crc kubenswrapper[4858]: I0127 22:02:29.980801 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerDied","Data":"ee4cf6e67b2c44879724fafd0d4096cc7ffbcaa8daa108db195da1dc16df08b5"} Jan 27 22:02:29 crc kubenswrapper[4858]: I0127 22:02:29.981039 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" event={"ID":"50837e4c-bd24-4b62-b1e7-b586e702bd40","Type":"ContainerStarted","Data":"aa2e1f7c64f001152f6161c0c1e483bff02cec48e22c945c5114c7c095032662"} Jan 27 22:02:29 crc kubenswrapper[4858]: I0127 22:02:29.981068 4858 scope.go:117] "RemoveContainer" containerID="cb4f1466eb31ad617148c9b086fcd1122cc72ea3abe3863e610473c1fad022a5" Jan 27 22:04:29 crc kubenswrapper[4858]: I0127 22:04:29.328971 4858 patch_prober.go:28] interesting pod/machine-config-daemon-psxnq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 22:04:29 crc kubenswrapper[4858]: I0127 22:04:29.329505 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-psxnq" podUID="50837e4c-bd24-4b62-b1e7-b586e702bd40" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"